title (stringlengths 28–135) | abstract (stringlengths 0–12k) | introduction (stringlengths 0–12k) |
---|---|---|
Tian_GradICON_Approximate_Diffeomorphisms_via_Gradient_Inverse_Consistency_CVPR_2023 | Abstract
We present an approach to learning regular spatial trans-
formations between image pairs in the context of medical
image registration. Contrary to optimization-based registra-
tion techniques and many modern learning-based methods,
we do not directly penalize transformation irregularities but
instead promote transformation regularity via an inverse
consistency penalty. We use a neural network to predict a
map between a source and a target image as well as the map
when swapping the source and target images. Different from
existing approaches, we compose these two resulting maps
and regularize deviations of the Jacobian of this composi-
tion from the identity matrix. This regularizer – GradICON –
results in much better convergence when training registra-
tion models compared to promoting inverse consistency of
the composition of maps directly while retaining the desir-
able implicit regularization effects of the latter. We achieve
state-of-the-art registration performance on a variety of real-
world medical image datasets using a single set of hyperpa-
rameters and a single non-dataset-specific training protocol.
Code is available at https://github.com/uncbiag/ICON .
| 1. Introduction
Image registration is a key component in medical image
analysis to estimate spatial correspondences between image
pairs [14, 53]. Applications include estimating organ motion
between treatment fractions in radiation therapy [25, 37],
capturing disease progression [64], or allowing for localized
analyses in a common coordinate system [19].
Many different registration algorithms have been pro-
posed over the last decades in medical imaging [10, 41, 44,
63, 64] and in computer vision [21, 33]. Contributions have
focused on different transformation models ( i.e., what types
of transformations are considered permissible), similarity
measures ( i.e., how “good alignment” between image pairs
is quantified), and solution strategies ( i.e., how transforma-
tion parameters are numerically estimated). The respective
choices are generally based on application requirements as
well as assumptions about image appearance and the ex-
pected transformation space.
*Equal Contribution.
Figure 1. Example source (left), target (middle) and warped source (right) images obtained with our method, trained with a single protocol, using the proposed GradICON regularizer. (Rows shown: OAI, HCP, COPDGene.)
In consequence, while reli-
able registration algorithms have been developed for trans-
formation models ranging from simple parametric models
(e.g., rigid and affine transformations) to significantly more
complex nonparametric formulations [41, 44, 63] that allow
highly localized control, practical applications of registration
typically require many choices and rely on significant pa-
rameter tuning to achieve good performance. Recent image
registration work has shifted the focus from solutions based
on numerical optimization for a specific image pair to learn-
ing to predict transformations based on large populations of
image pairs via neural networks [10,15,17,34,35,56,57,68].
However, while numerical optimization is now replaced by
training a regression model which can be used to quickly pre-
dict transformations at test time, parameter tuning remains a
key challenge as loss terms for these two types of approaches
are highly related (and frequently the same). Further, one
also has additional choices regarding network architectures.
Impressive strides have been made in optical flow estima-
tion as witnessed by the excellent performance of recent
approaches [34] on Sintel [7]. However, our focus is medical
image registration, where smooth and often diffeomorphic
transformations are desirable; here, a simple-to-use learning-
based registration approach, which can adapt to different
types of data, has remained elusive. In particular, nonpara-
metric registration approaches require a balance between
image similarity and regularization of the transformation to
assure good matching at a high level of spatial regularity,
as well as choosing a suitable regularizer. This difficulty is
compounded in a multi-scale approach where registrations
at multiple scales are used to avoid poor local solutions.
Instead of relying on a complex spatial regularizer, the
recent ICON approach [23] uses only inverse consistency to
regularize the sought-after transformation map, thereby dra-
matically reducing the number of hyperparameters to tune.
While inverse consistency is not a new concept in image
registration and has been explored to obtain transformations
that are inverses of each other when swapping the source
and the target images [11], ICON [23] has demonstrated that
a sufficiently strong inverse consistency penalty, by itself, is
sufficient for spatial regularity when used with a registration
network. Further, as ICON does not explicitly penalize spatial
gradients of the deformation field, it does not require pre-
registration ( e.g., rigid or affine), unlike many other related
works. However, while conceptually attractive, ICON suf-
fers from the following limitations: 1) training convergence
is slow, rendering models costly to train; and 2) enforcing
approximate inverse consistency strictly enough to prevent
folds becomes increasingly difficult at higher spatial res-
olutions, necessitating a suitable schedule for the inverse
consistency penalty, which is not required for GradICON .
Our approach is based on a surprisingly simple, but ef-
fective observation: penalizing the Jacobian of the inverse
consistency condition instead of inverse consistency directly¹
applies zero penalty for inverse consistent transform pairs but
1) yields significantly improved convergence, 2) no longer re-
quires careful scheduling of the inverse consistency penalty,
3) results in spatially regular maps, and 4) improves regis-
tration accuracy. These benefits facilitate a unified training
protocol with the same network structure, regularization
parameter, and training strategy across registration tasks.
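To make this concrete, below is a minimal sketch of how a penalty on ∇(Φ^AB_θ ∘ Φ^BA_θ − Id) could be computed for 2D coordinate maps with finite differences; the tensor layout, the compose helper, and the equal weighting of the two difference terms are illustrative assumptions, not the authors' implementation (the repository linked in the abstract contains the real one).

```python
# Hypothetical sketch of a gradient-inverse-consistency penalty for 2D coordinate
# maps phi_AB, phi_BA of shape (B, 2, H, W) in pixel units (channel 0 = row,
# channel 1 = column). The compose helper and finite-difference Jacobian are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def compose(phi_outer, phi_inner):
    """Evaluate phi_outer at the locations given by phi_inner (phi_outer o phi_inner)."""
    B, _, H, W = phi_inner.shape
    grid = torch.stack(
        [2.0 * phi_inner[:, 1] / (W - 1) - 1.0,   # x, normalized to [-1, 1]
         2.0 * phi_inner[:, 0] / (H - 1) - 1.0],  # y, normalized to [-1, 1]
        dim=-1)                                   # (B, H, W, 2) for grid_sample
    return F.grid_sample(phi_outer, grid, align_corners=True)

def gradicon_penalty(phi_AB, phi_BA):
    comp = compose(phi_AB, phi_BA)                 # ideally the identity map
    d_dy = comp[:, :, 1:, :] - comp[:, :, :-1, :]  # finite-difference Jacobian rows
    d_dx = comp[:, :, :, 1:] - comp[:, :, :, :-1]
    eye_y = comp.new_tensor([1.0, 0.0]).view(1, 2, 1, 1)  # d(identity)/dy
    eye_x = comp.new_tensor([0.0, 1.0]).view(1, 2, 1, 1)  # d(identity)/dx
    return ((d_dy - eye_y) ** 2).mean() + ((d_dx - eye_x) ** 2).mean()
```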
Our contributions are as follows:
•We develop GradICON (Gradient Inverse CONsistency), a
versatile regularizer for learning-based image registration
that relies on penalizing the Jacobian of the inverse consis-
tency constraint and results, empirically and theoretically,
in spatially well-regularized transformation maps.
¹ i.e., penalizing deviations from ∇(Φ^AB_θ ∘ Φ^BA_θ − Id) = 0 instead of deviations from Φ^AB_θ ∘ Φ^BA_θ − Id = 0.
•We demonstrate state-of-the-art (SOTA) performance of
models trained with GradICON on three large medical
datasets: a knee magnetic resonance image (MRI) dataset
of the Osteoarthritis Initiative (OAI) [46], the Human Con-
nectome Project’s collection of Young Adult brain MRIs
(HCP) [60], and a computed tomography (CT) inhale/ex-
hale lung dataset from COPDGene [47].
|
Stegmuller_CrOC_Cross-View_Online_Clustering_for_Dense_Visual_Representation_Learning_CVPR_2023 | Abstract
Learning dense visual representations without labels is
an arduous task and more so from scene-centric data. We
tackle this challenging problem by proposing a
Cross-view consistency objective with an Online Clustering
mechanism (CrOC) to discover and segment the semantics
of the views. In the absence of hand-crafted priors, the re-
sulting method is more generalizable and does not require
a cumbersome pre-processing step. More importantly, the
clustering algorithm conjointly operates on the features of
both views, thereby elegantly bypassing the issue of content
not represented in both views and the ambiguous matching
of objects from one crop to the other. We demonstrate excel-
lent performance on linear and unsupervised segmentation
transfer tasks on various datasets and similarly for video
object segmentation. Our code and pre-trained models are
publicly available at https://github.com/stegmuel/CrOC .
| 1. Introduction
Self-supervised learning (SSL) has gone a long and suc-
cessful way since its beginning using carefully hand-crafted
proxy tasks such as colorization [ 26], jigsaw puzzle solv-
ing [32], or image rotations prediction [ 14]. In recent years,
a consensus seems to have been reached, and cross-view
consistency is used in almost all state-of-the-art (SOTA) vi-
sual SSL methods [ 5–7,15,19]. In that context, the whole
training objective revolves around the consistency of rep-
resentation in the presence of information-preserving trans-
formations [ 7], e.g., blurring ,cropping ,solarization , etc.
Although this approach is well grounded in learning image-
level representations in the unrealistic scenario of object-
centric datasets, e.g., ImageNet [ 11], it cannot be trivially
extended to accommodate scene-centric datasets and even
less to learn dense representations.
* denotes equal contribution.
Figure 1. Schematic for different categories of self-supervised
learning methods for dense downstream tasks. a) Prior to the
training, a pre-trained model or color-based heuristic is used to
produce the clustering/matching of the whole dataset. c) The
matching/clustering is identified online but restrains the domain
of application of the loss to the intersection of the two views. b)
Our method takes the best of both worlds, leverages online cluster-
ing, and enforces constraints on the whole spatiality of the views.
Indeed, in the presence
of complex scene images, the random cropping operation
used as image transformation loses its semantic-preserving
property, as a single image can yield two crops bearing an-
tipodean semantic content [ 31,35–37]. Along the same line,
it’s not clear how to relate sub-regions of the image from
one crop to the other, which is necessary to derive a local-
ized supervisory signal.
To address the above issue, some methods [ 31,36] con-
strain the location of the crops based on some heuristics and
using a pre-processing step. This step is either not learnable
or requires the use of a pre-trained model. Alternatively, the
location of the crops ( geometric pooling [45,51]) and/or an
attention mechanism ( attentive pooling [33,42,44,48,51])
can be used to infer the region of overlap in each view
and only apply the consistency objective to that region
(Fig. 1.c). A consequence of these pooling mechanisms is
that only a sub-region of each view is exploited, which mis-
lays a significant amount of the image and further questions
the usage of cropping . There are two strategies to tackle the
issue of locating and linking the objects from the two views:
the first is a feature-level approach that extends the global
consistency criterion to the spatial features after inferring
pairs of positives through similarity bootstrapping or posi-
tional cues [ 2,28,30,41,48,51]. It is unclear how much
semantics a single spatial feature embeds, and this strat-
egy can become computationally intensive. These issues
motivate the emergence of the second line of work which
operates at the object-level [ 20,21,38,40,43,44,47]. In
that second scenario, the main difficulty lies in generating
the object segmentation masks and matching objects from
one view to the other. The straightforward approach is to
leverage unsupervised heuristics [ 20] or pre-trained mod-
els [47] to generate pseudo labels prior to the training phase
(Fig. 1.a), which is not an entirely data-driven approach
and cannot be trivially extended to any modalities. Alter-
natively, [ 21] proposed to use K-Means and an additional
global image (encompassing the two main views) to gener-
ate online pseudo labels, but this approach is computation-
ally intensive.
To address these limitations, we propose CrOC, whose
underpinning mechanism is an efficient Cross-view Online
Clustering that conjointly generates segmentation masks for
the union of both views (Fig. 1.b).
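To illustrate the core idea of clustering both views jointly so that cluster indices are shared across views, here is a toy sketch using plain k-means on the concatenated patch tokens; CrOC's actual online clustering is more elaborate, and the function name, choice of k, and iteration count below are assumptions for illustration only.

```python
# Hypothetical sketch of joint cross-view clustering with plain k-means: cluster
# the concatenated patch tokens of two views of the same image, then split the
# assignments back per view. Shared centroid indices link object c in view 1 to
# object c in view 2, giving cross-view pairs for a dense consistency objective.
import torch

@torch.no_grad()
def joint_cluster(tokens_v1, tokens_v2, k=6, iters=10):
    # tokens_v*: (N_i, D) patch features of each augmented view
    z = torch.cat([tokens_v1, tokens_v2], dim=0)        # cluster the union
    centroids = z[torch.randperm(z.shape[0])[:k]].clone()
    for _ in range(iters):
        assign = torch.cdist(z, centroids).argmin(dim=1)
        for c in range(k):
            members = z[assign == c]
            if members.numel() > 0:
                centroids[c] = members.mean(dim=0)
    n1 = tokens_v1.shape[0]
    return assign[:n1], assign[n1:]                     # per-view pseudo-labels
```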
Our main contributions are: 1) we propose a novel
object-level self-supervised learning framework that lever-
ages an online clustering algorithm yielding segmentation
masks for the union of two image views. 2) The intro-
duced method is inherently compatible with scene-centric
datasets and does not require a pre-trained model. 3) We
empirically and thoroughly demonstrate that our approach
rivals or out-competes existing SOTA self-supervised meth-
ods even when pre-trained in an unfavorable setting (smaller
and more complex dataset).
|
Tilmon_Energy-Efficient_Adaptive_3D_Sensing_CVPR_2023 | Abstract
Active depth sensing achieves robust depth estimation
but is usually limited by the sensing range. Naively increas-
ing the optical power can improve sensing range but in-
duces eye-safety concerns for many applications, including
autonomous robots and augmented reality. In this paper, we
propose an adaptive active depth sensor that jointly opti-
mizes range, power consumption, and eye-safety. The main
observation is that we need not project light patterns to the
entire scene but only to small regions of interest where depth
is necessary for the application and passive stereo depth es-
timation fails. We theoretically compare this adaptive sens-
ing scheme with other sensing strategies, such as full-frame
projection, line scanning, and point scanning. We show
that, to achieve the same maximum sensing distance, the
proposed method consumes the least power while having
the shortest (best) eye-safety distance. We implement this
adaptive sensing scheme with two hardware prototypes, one
with a phase-only spatial light modulator (SLM) and the
other with a micro-electro-mechanical (MEMS) mirror and
diffractive optical elements (DOE). Experimental results
validate the advantage of our method and demonstrate its
capability of acquiring higher quality geometry adaptively.
Please see our project website for video results and code:
https://btilmon.github.io/e3d.html .
| 1. Introduction
Active 3D depth sensors have diverse applications in
augmented reality, navigation, and robotics. Recently, these
sensor modules are widely used in consumer products, such
as time-of-flight ( e.g., Lidar [15]), structured light ( e.g.,
Kinect V1 [18]) and others. In addition, many computer
vision algorithms have been proposed to process the ac-
quired data for downstream tasks such as 3D semantic un-
derstanding [29], object tracking [17], guided upsampling
in SLAM [24], etc.
*Work done during internship at Snap Research.
†Co-corresponding authors
Figure 1. Energy-efficient adaptive 3D sensing. (a) Depth
sensing devices have three key optimization goals: minimizing
the power consumption and eye-safety distance while maximiz-
ing sensing distance. However, these goals are coupled to each
other. (b)We propose a novel adaptive sensing method with an
active stereo setup and a projector that can redistribute the light
energy and project the pattern only to the required regions (e.g.,
textureless regions). (c)The proposed approach outperforms pre-
vious methods including full-frame pattern systems (like Intel Re-
alSense) and line-scanning systems (like Episcan3D [22]): When
the maximum sensing distance is the same, the required power is
much less and the eye-safety distance is also shorter.
Unlike stereo cameras that only sense reflected ambient
light passively, active depth sensors illuminate the scene
with modulated light patterns, either spatially, temporally,
or both. The illumination encodings allow robust estima-
tion of scene depths. However, this also leads to three
shortcomings: First, active depth sensors consume optical
power, burdening wearable devices that are on a tight power
budget. Second, the number of received photons reflected
back from the scene drops with inverse-square relationship
to scene depth.
Figure 2. Method overview. Our system consists of a stereo-camera pair and an adaptive projector. The system first captures an image of
the scene (Step 1) and then computes an attention map to determine the ROI (Step 2). This attention map is used to compute the control
signal for the specific projector implementation (Step 3) such that light is only projected to the ROI (Step 4). A depth map is then computed
from the new stereo images, which can be used for applications such as augmented reality (Step 5).
The maximum sensing distance is thereby
limited by the received signal-to-noise ratio (SNR). Third,
strong, active light sources on the device may unintention-
ally hurt the user or other people around. For consumer
devices, this constraint can be as strict as ensuring safety
when a baby accidentally stares at the light source directly.
Interestingly, these three factors are often entangled with
each other. For example, naively increasing range by rais-
ing optical power makes the device less eye-safe. An active
3D sensor would benefit from the joint optimization of these
three goals, as illustrated in Fig. 1(a).
In this paper, we present an adaptive depth-sensing strat-
egy. Our key idea is that the coded scene illumination need
not be sent to the entire scene (Fig. 1(b)). Intuitively, by lim-
iting the illumination samples, the optical power per sample
is increased, therefore extending the maximum sensing dis-
tance. This idea of adaptive sensing is supported by three
observations: First, illumination samples only need to be
sent to parts of a scene where passive depth estimation fails
(e.g., due to lack of texture). Second, depth estimation
is often application-driven, e.g., accurate depths are only
needed around AR objects to be inserted into the scene. Fi-
nally, for video applications, sending light to regions where
depths are already available from previous frames is re-
dundant. Based on these observations, we demonstrate
this adaptive idea with a stereo-projector setup ( i.e., active
stereo [4, 9, 34]), where an attention map is computed from
the camera images for efficient light redistribution.
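As a rough sketch of how such an attention map could be formed, the code below flags low-texture pixels, where passive stereo correspondence is typically unreliable, using local gradient energy; the texture statistic, window size, and threshold are arbitrary assumptions rather than the criterion used in this work.

```python
# Hypothetical attention-map sketch: flag low-texture regions, where passive
# stereo matching is likely to fail, so the projector can concentrate its pattern
# there. The texture statistic, window size, and threshold are arbitrary choices.
import numpy as np
from scipy.ndimage import uniform_filter

def attention_map(gray, win=15, thresh=25.0):
    gray = gray.astype(np.float32)
    gy, gx = np.gradient(gray)
    grad_energy = gx ** 2 + gy ** 2
    texture = uniform_filter(grad_energy, size=win)  # local average gradient energy
    return texture < thresh  # True = low texture = candidate ROI for projection

# Example usage: roi = attention_map(left_image); the projector pattern is then
# redistributed so that light lands only on the True pixels (Step 2 in Figure 2).
```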
To quantitatively understand the benefits of our ap-
proach, we propose a sensor model that analytically char-
acterizes various sensing strategies, including full-frame
(e.g., RealSense [19]), line-scanning ( e.g., Episcan3D [22]),
point-scanning ( e.g., Lidar [25]) and proposed adaptive
sensing. We establish, for the first time, a framework that
jointly analyzes the power, range, and eye-safety of differ-
ent strategies and demonstrates that, for the same maximum
sensing distance, adaptive sensing consumes the least power
while achieving the shortest (best) eye-safety distance.
Note that the realization of scene-adaptive illumination
is not trivial: Common off-the-shelf projectors simply block
part of the incident light, which wastes optical power. We
propose two hardware implementations for adaptive illumi-
nation: One is inspired by digital holography, which uses
Liquid Crystal on Silicon (LCoS) Spatial Light Modula-
tors (SLM) to achieve free-form light projection. The other
implementation uses diffractive optical elements (DOE) to
generate dot patterns in a local region of interest (ROI),
which is directed to different portions of the scene by a
micro-electro-mechanical (MEMS) mirror.
Our contributions are summarized as follows:
•Adaptive 3D sensing theory : We propose adaptive
3D sensing and demonstrate its advantage in a theo-
retical framework that jointly considers range, power
and eye-safety.
•Hardware implementation : We implement the pro-
posed adaptive active stereo approach with two hard-
ware prototypes based on SLM and MEMS + DOE.
•Experimental validation : Real-world experimental
results validate that our sensor can adapt to the scene
and outperform existing sensing strategies.
|
Sun_Bi-Directional_Feature_Fusion_Generative_Adversarial_Network_for_Ultra-High_Resolution_Pathological_CVPR_2023 | Abstract
The cost of pathological examination makes virtual re-
staining of pathological images meaningful. However, due
to the ultra-high resolution of pathological images, tradi-
tional virtual re-staining methods have to divide a WSI im-
age into patches for model training and inference. Such a
limitation leads to the lack of global information, result-
ing in observable differences in color, brightness and con-
trast when the re-stained patches are merged to generate
an image of larger size. We summarize this issue as the
square effect. Some existing methods try to solve this is-
sue through overlapping between patches or simple post-
processing. But the former one is not that effective, while
the latter one requires careful tuning. In order to elim-
inate the square effect, we design a bi-directional feature
fusion generative adversarial network (BFF-GAN) with a
global branch and a local branch. It learns the inter-
patch connections through the fusion of global and local
features plus patch-wise attention. We perform experiments
on both the private dataset RCC and the public dataset AN-
HIR. The results show that our model achieves competitive
performance and is able to generate extremely real images
that are deceptive even for experienced pathologists, which
means it is of great clinical significance.
| 1. Introduction
Pathological examination is the primary method of can-
cer diagnosis. Different dyes can interact with different
components in tissues or cells, making it easier to distin-
guish different microstructures, abnormal substances and
lesions.
*Zhineng Chen is the corresponding author.
Among various staining methods, the most com-
mon and basic one is the hematoxylin-eosin (HE) staining.
However, given the result of HE staining, it is not always
enough to make a diagnosis. Therefore, immunohistochem-
istry (IHC) staining based on specific binding of antigen and
antibody is also necessary in diagnosis, even though it is
complex, time-consuming and expensive [7, 25].
Due to the cost of IHC, some researchers have tried to
generate one type of staining images from another type
(usually HE) via computational methods. This can reduce
the consumption of materials, money and time during di-
agnosis. Such a task is usually called virtual re-staining.
This task is close to the style transfer of natural images, so
it is possible to apply style transfer methods to virtual re-
staining. Since pathological images are usually unpaired,
virtual re-staining is generally done by unsupervised meth-
ods, such as [4, 19,22]. These approaches are all based
on style transfer models for natural images. Researchers
made some improvements according to the characteristics
of pathological images, and finally achieved better results.
However, on the other hand, pathological images have
their own characteristics. For example, the reliability of the
results is more critical for this task due to the clinical sig-
nificance of pathological examination. Meanwhile, the res-
olution of pathological images is usually higher than that of
natural images, reaching 10k×10kor more. Therefore, vir-
tual re-staining requires additional computational resources
as the GPU memory is limited. Most of the existing vir-
tual re-staining models solve this problem by splitting WSI
(whole-slide imaging) images into smaller patches for train-
ing and inference, and then incorporating these patches into
WSI images through post-processing. This results in dif-
ferences in color and brightness between adjacent patches,
which we call the square effect.
Figure 1. Illustration of the square effect. (a) a real 1600×1600
HE-stained image. (b) a virtually re-stained CK7 image obtained
by merging separately re-stained 400×400 patches (using Cy-
cleGAN) without overlapping. (c) image obtained by merging
448×448 patches with an overlap of 64 pixels. (d) the result
generated by our BFF-GAN.
As shown in Fig. 1, (a) is
a real HE image, (b) and (c) are CK7 images re-stained by
CycleGAN without and with an overlap. Generally, over-
lapping is used to solve this problem, but we can see that
the square effect always exists no matter whether there is an
overlap. Meanwhile, the result of our BFF-GAN is shown
in (d), and it is not easy to find the square effect in it.
Indeed, the square effect exists because patch-based vir-
tual re-staining lacks global information, resulting in mis-
matches in hue, contrast and brightness between adjacent
patches, especially for the regions with different tissue
structures and the boundary regions. In addition, since the
re-staining of each patch is independent, there may also be
color differences between the re-staining results of patches
with similar tissue structures. Most existing studies did not
consider the global information, leading to serious square
effect. To solve this problem, [18] proposed perceptual
embedding consistency (PEC) loss, which forces the gen-
erator to learn features like color, brightness and contrast.
But it is hard to say only using the PEC loss on the patch
level can solve this problem to what extent. Subsequently,
URUST [11] attempted to correct the mean value and stan-
dard deviation of the central patch with those of the sur-
rounding patches. However, the parameters of this method
are artificially designed and may not generalize well.
In the natural image domain, some researchers have at-
tempted to get improvement through context aggregation.
For example, PSPNet [39] improved the performance of se-
mantic segmentation by increasing the receptive field with
pooling kernels of different sizes. HRNet [32] designed par-
allel branches with different resolutions and integrated fea-
ture maps between different branches, achieving impressive
performance in a number of visual tasks. GLNet [6] com-
bined feature maps of the entire image with those of patches
to improve segmentation performance of high-resolution
images. These methods obtained multi-scale information
through feature fusion, and worked well on multiple tasks.
Based on these observations, in this paper, we propose
a model that combines global-local features to learn the re-
lationship between patches to solve the square effect, and
meanwhile, bypasses the memory constraint for ultra-high
resolution images. We design an architecture that consists
of a global branch and a local branch, where the former
takes the down-sampled whole images as input, and the lat-
ter takes the patch-level images coming in batches as input.
The two branches perform feature fusion in both directions
in the encoder and use patch-wise attention and skip con-
nections to enhance feature expression capability. Finally,
we fuse the features of the two branches to output the re-
staining results. To verify the effectiveness of the method,
extensive experiments were conducted on the private dataset
RCC and the public dataset ANHIR [3]. The results show
that our model achieves good performance on a variety of
metrics, not only significantly eliminating the square effect,
but also being generalizable to various datasets. Mean-
while, subjective experiments have also demonstrated the
clinical significance of our model. In summary, our main
contributions are listed as follows:
- The square effect significantly influences the quality
of the virtually re-stained images. Thus, we propose
to solve the square effect through the fusion of global
and local features. Such an idea can be used in various
networks, not only CycleGAN, but also other more ad-
vanced style transfer models.
- We propose a model with feature fusion between two
branches called BFF-GAN to learn the inter-patch re-
lations. To our knowledge, it is the first network for
style transfer of ultra-high resolution images.
- Our proposed BFF-GAN achieves impressive results.
It is of great clinical significance and can be general-
ized to various datasets.
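For illustration, here is a minimal sketch of the global-to-local half of such a fusion: crop the global feature map (computed on the downsampled whole image) at the patch's location, resize it, and mix it into the local patch features. The module name, box convention, and concat-plus-1×1-convolution fusion are GLNet-style assumptions, not the exact BFF-GAN design with bi-directional fusion and patch-wise attention described above.

```python
# Hypothetical sketch of the global-to-local direction of feature fusion: crop the
# global feature map (from the downsampled whole image) at the patch's location,
# resize it, and mix it into the local patch features. Names, shapes, and the
# fusion operator are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalToLocalFusion(nn.Module):
    def __init__(self, c_global, c_local):
        super().__init__()
        self.mix = nn.Conv2d(c_global + c_local, c_local, kernel_size=1)

    def forward(self, f_local, f_global, box):
        # f_local:  (B, C_l, h, w)  features of one patch
        # f_global: (B, C_g, H, W)  features of the downsampled whole image
        # box: (y0, y1, x0, x1)     patch location in global feature coordinates
        y0, y1, x0, x1 = box
        crop = f_global[:, :, y0:y1, x0:x1]
        crop = F.interpolate(crop, size=f_local.shape[-2:],
                             mode="bilinear", align_corners=False)
        return self.mix(torch.cat([crop, f_local], dim=1))
```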
|
Tian_CABM_Content-Aware_Bit_Mapping_for_Single_Image_Super-Resolution_Network_With_CVPR_2023 | Abstract
With the development of high-definition display devices,
the practical scenario of Super-Resolution (SR) usually
needs to super-resolve large input like 2K to higher reso-
lution (4K/8K). To reduce the computational and memory
cost, current methods first split the large input into local
patches and then merge the SR patches into the output.
These methods adaptively allocate a subnet for each patch.
Quantization is a very important technique for network ac-
celeration and has been used to design the subnets. Current
methods train an MLP bit selector to determine the proper
bit for each layer. However, they uniformly sample subnets
for training, making simple subnets overfitted and compli-
cated subnets underfitted. Therefore, the trained bit selector
fails to determine the optimal bit. Apart from this, the in-
troduced bit selector brings additional cost to each layer of
the SR network. In this paper, we propose a novel method
named Content-Aware Bit Mapping (CABM), which can re-
move the bit selector without any performance loss. CABM
also learns a bit selector for each layer during training. Af-
ter training, we analyze the relation between the edge in-
formation of an input patch and the bit of each layer. We
observe that the edge information can be an effective metric
for the selected bit. Therefore, we design a strategy to build
an Edge-to-Bit lookup table that maps the edge score of a
patch to the bit of each layer during inference. The bit con-
figuration of the SR network can be determined by the lookup
tables of all layers. Our strategy can find better bit configu-
ration, resulting in more efficient mixed precision networks.
We conduct detailed experiments to demonstrate the gener-
alization ability of our method. The code will be released.
*This work was supported by the Fundamental Research Funds for the
Central Universities (2022JBMC013), the National Natural Science Foun-
dation of China (61976017 and 61601021), and the Beijing Natural Sci-
ence Foundation (4202056). Shunli Zhang is the corresponding author.
| 1. Introduction
Single Image Super-Resolution (SISR) is an important
computer vision task that reconstructs a High-Resolution
(HR) image from a Low-Resolution (LR) image. With the
advent of Deep Neural Networks (DNNs), lots of DNN-
based SISR methods have been proposed over the past few
years [6, 17, 24, 27, 34]. While in real-world usages, the
resolutions of display devices have already reached 4K or
even 8K. Apart from normal 2D images, the resolutions
of omnidirectional images might reach even 12K or 16K.
Therefore, SR techniques with large input are becoming
crucial and have gained increasing attention from the com-
munity [4, 12, 18, 28].
Since the memory and computational cost will grow
quadratically with the input size, existing methods [4, 12,
18,28] first split the large input into patches and then merge
the SR patches to the output. They reduce the computational
cost by allocating simple subnets to those flat regions while
using heavy subnets for those detailed regions. Therefore,
how to design the subnets is very important for these meth-
ods. [4, 18] empirically decide the optimal channels after
lots of experiments to construct the subnets. [28] proposes
to train a regressor to predict the incremental capacity of
each layer. Thus they can adaptively construct the subnets
by reducing the layers. Compared with pruning the chan-
nels or layers, quantization is another promising technique
and can achieve more speedup. [12] trains an MLP bit selec-
tor to determine the proper bit for each layer given a patch.
However, the introduced MLP of each layer brings addi-
tional computational and storage cost. Besides, we observe
that [12] uniformly samples the subnets for training, mak-
ing simple subnets (low average bit or flat patches) tend to
overfit the inputs while complicated subnets (high average
bit or detailed patches) tend to underfit the inputs. There-
fore, uniform sampling fails to determine the optimal bit for
each layer.
Figure 1. The pipeline of our CABM method. p_i ∈ {p_i}, i = 1, ..., K, is the probability of choosing the i-th quantization module, and each quantization module uses a different bit-width to quantize the input activation. During training, our method learns an MLP bit selector to adaptively choose the bit-width for each convolution. During inference, we use the proposed CABM to build an Edge-to-Bit lookup table to determine the bit-width with negligible additional cost.
To solve the limitations of [12], we propose a novel
method named Content-Aware Bit Mapping (CABM),
which directly uses a lookup table to generate the bit of each
layer during inference. However, building a lookup table
is difficult since there are thousands of patches and corre-
sponding selected bits. We observe that edge information can
be an effective metric for patch representation. Therefore,
we analyze the relation between the edge information of a
patch and the bit of each layer. Inspired by the fact that an
MLP selector learns the nonlinear mapping between a patch
and the bit, instead of building the Edge-to-Bit lookup ta-
ble based on linear mapping, we design a tactful calibration
strategy to map the edge score of a patch to the bit of each
layer. The bit configuration of the SR network can be deter-
mined by the lookup tables of all layers. Our CABM can
achieve the same performance compared with the MLP se-
lectors while resulting in a lower average bit and negligi-
ble additional computational cost. Our contributions can be
concluded as follows:
• We propose a novel method that maps edge informa-
tion to bit configuration of SR networks, significantly
reducing the memory and computational cost of bit se-
lectors without performance loss.
• We present a tactful calibration strategy to build the
Edge-to-Bit lookup tables, resulting in a lower average
bit for SR networks.
• We conduct detailed experiments to demonstrate the
generalization ability of our method based on various
SR architectures and scaling factors.
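To make the Edge-to-Bit idea concrete, here is a toy sketch of the inference-time lookup: compute an edge score for a patch and read one bit-width per layer from a precomputed table. The Laplacian-based edge score, the bin edges, and the table values are illustrative assumptions, not the calibration strategy proposed in the paper.

```python
# Hypothetical Edge-to-Bit lookup at inference time: map a patch's edge score to
# one bit-width per layer via a precomputed table. The edge measure (mean absolute
# Laplacian), the bin edges, and the table values are illustrative assumptions.
import numpy as np

# EDGE_TO_BIT[layer][bin] -> bit-width; one row per quantized layer (toy values).
EDGE_TO_BIT = np.array([
    [4, 6, 8, 8],   # layer 0
    [4, 4, 6, 8],   # layer 1
    [2, 4, 6, 8],   # layer 2
])
BIN_EDGES = np.array([2.0, 6.0, 12.0])  # thresholds separating the 4 edge bins

def edge_score(patch):
    """Mean absolute Laplacian of a grayscale patch (H, W) as a crude edge score."""
    p = patch.astype(np.float32)
    lap = (-4.0 * p[1:-1, 1:-1] + p[:-2, 1:-1] + p[2:, 1:-1]
           + p[1:-1, :-2] + p[1:-1, 2:])
    return float(np.abs(lap).mean())

def bits_for_patch(patch):
    b = int(np.digitize(edge_score(patch), BIN_EDGES))  # bin index 0..3
    return EDGE_TO_BIT[:, b]                            # bit-width per layer

# Example: bits = bits_for_patch(lr_patch) selects the per-layer bit configuration
# for this patch before running the quantized SR subnet.
```
|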
Tang_FLAG3D_A_3D_Fitness_Activity_Dataset_With_Language_Instruction_CVPR_2023 | Abstract
With the continuously thriving popularity around the
world, fitness activity analytics has become an emerging re-
search topic in computer vision. While a variety of new
tasks and algorithms have been proposed recently, there
is a growing hunger for data resources with high-
quality data, fine-grained labels, and diverse environments.
In this paper, we present FLAG3D, a large-scale 3D fit-
ness activity dataset with language instruction containing
180K sequences of 60 categories. FLAG3D features the
following three aspects: 1) accurate and dense 3D hu-
man pose captured from advanced MoCap system to handle
the complex activity and large movement, 2) detailed and
professional language instruction to describe how to per-
form a specific activity, 3) versatile video resources from
a high-tech MoCap system, rendering software, and cost-
effective smartphones in natural environments. Extensive
experiments and in-depth analysis show that FLAG3D con-
tributes great research value for various challenges, such as cross-domain human action recognition, dynamic human
mesh recovery, and language-guided human action genera-
tion. Our dataset and source code are publicly available at
https://andytang15.github.io/FLAG3D .
| 1. Introduction
With the great demand of keeping healthy, reducing high
pressure from working and staying in shape, fitness activity
has become more and more important and popular during
the past decades [22]. According to the statistics¹, there are
over 200,000 fitness clubs and 170 million club members all
over the world. More recently, because of the high expense
of coaches and the outbreak of COVID-19, increasing peo-
ple choose to forgo gym memberships and do the work-
out by watching the fitness instructional videos from fitness
apps or YouTube channels ( e.g., FITAPP, ATHLEAN-X,
The Fitness Marshall, etc.). Therefore, it is desirable to ad-
vance current intelligent vision systems to assist people to
perceive, understand and analyze various fitness activities.
¹ https://policyadvice.net/insurance/insights/fitness-industry-statistics
Table 1. Comparisons of FLAG3D with the relevant datasets. FLAG3D consists of 180K sequences (Seqs) of 60 fitness activity categories (Cats). It contains both low-level features, including 3D key points (K3D) and SMPL parameters, as well as high-level language annotation (LA) to instruct trainers, sharing merits of multiple resources from MoCap system in laboratory (Lab), synthetic (Syn.) data by rendering software and natural (Nat.) scenarios. We evaluate various tasks in this paper, including human action recognition (HAR), human mesh recovery (HMR), and human action generation (HAG), while more potential applications like human pose estimation (HPE), repetitive action counting (RAC), action quality assessment and visual grounding could be explored in the future (see Section 5 for more details).
| Dataset | Subjs | Cats | Seqs | Frames | LA | K3D | SMPL | Resource | Task |
|---|---|---|---|---|---|---|---|---|---|
| PoseTrack [7] | - | - | 550 | 66K | × | × | × | Nat. | HPE |
| Human3.6M [33] | 11 | 17 | 839 | 3.6M | × | ✓ | - | Lab | HAR,HPE,HMR |
| CMU Panoptic [37] | 8 | 5 | 65 | 594K | × | ✓ | - | Lab | HPE |
| MPI-INF-3DHP [57] | 8 | 8 | - | >1.3M | × | ✓ | - | Lab+Nat. | HPE,HMR |
| 3DPW [96] | 7 | - | 60 | 51k | × | × | ✓ | Nat. | HMR |
| ZJU-MoCap [68] | 6 | 6 | 9 | >1k | × | ✓ | ✓ | Lab | HAR,HMR |
| NTU RGB+D 120 [51] | 106 | 120 | 114k | - | × | ✓ | - | Lab | HAR,HAG |
| HuMMan [11] | 1000 | 500 | 400K | 60M | × | ✓ | ✓ | Lab | HAR,HMR |
| HumanML3D [26] | - | - | 14K | - | ✓ | ✓ | ✓ | Lab | HAG |
| KIT Motion Language [71] | 111 | - | 3911 | - | ✓ | ✓ | - | Lab | HAG |
| HumanAct12 [28] | 12 | 12 | 1191 | 90K | × | × | ✓ | Lab | HAG |
| UESTC [35] | 118 | 40 | 25K | >5M | × | ✓ | - | Lab | HAR,HAG |
| Fit3D [22] | 13 | 37 | - | >3M | × | ✓ | ✓ | Lab | HPE,RAC |
| EC3D [115] | 4 | 3 | 362 | - | × | ✓ | - | Lab | HAR |
| Yoga-82 [95] | - | 82 | - | 29K | × | × | × | Nat. | HAR,HPE |
| FLAG3D (Ours) | 10+10+4 | 60 | 180K | 20M | ✓ | ✓ | ✓ | Lab+Syn.+Nat. | HAR,HMR,HAG |
In recent years, a variety of datasets have been proposed
in the field [22, 95, 115], which have provided good bench-
marks for preliminary research. However, these datasets
might have limited capability to model complex poses,
describe a fine-grained activity, and generalize to differ-
ent scenarios. We present FLAG3D in this paper, a 3D
Fitness activity dataset with LAnGuage instruction. Fig-
ure 1 presents an illustration of our dataset, which con-
tains 180K sequences of 60 complex fitness activities ob-
tained from versatile sources, including a high-tech MoCap
system, professional rendering software, and cost-effective
smartphones. In particular, FLAG3D advances current re-
lated datasets from the following three aspects:
Highly Accurate and Dense 3D Pose. For fitness activ-
ity, there are various poses, such as lying, crouching, rolling
up, and jumping, which involve heavy self-occlusion and
large movements. These complex cases bring inevitable ob-
stacles for conventional appearance-based or depth-based
sensors to capture the accurate 3D pose. To address this, we
set up an advanced MoCap system with 24 VICON cam-
eras [5] and professional MoCap clothes with 77 motion
markers to capture the trainers’ detailed and dense 3D pose.
Detailed Language Instruction. Most existing fitness
activity datasets merely provide a single action label or
phase for each action [95,115]. However, understanding fit-
ness activities usually requires more detailed descriptions.
We collect a series of sentence-level language instructions
for describing each fine-grained movement. Introducing
language would also facilitate various research regarding
emerging multi-modal applications.
Diverse Video Resources. To advance the research di-
rectly into a more general field, we collect versatile videos
for FLAG3D. Besides the data captured from the expensive
MoCap system, we further provide the synthetic sequences
with high realism produced by rendering software and the
corresponding SMPL parameters. In addition, FLAG3D
also contains videos from natural real-world environments
obtained by cost-effective and user-friendly smartphones.
To understand the new challenges in FLAG3D, we eval-
uate a series of recent advanced approaches and set a
benchmark for various tasks, including skeleton-based ac-
tion recognition, human mesh recovery, and dynamic ac-
tion generation. Through the experiments, we find that 1)
while the state-of-the-art skeleton-based action recognition
methods have attained promising performance with highly
accurate MoCap data under the in-domain scenario, the re-
sults drop significantly under the out-domain scenario re-
garding the rendered and natural videos. 2) Current 3D pose
and shape estimation approaches easily fail on some poses,
such as kneeling and lying, owing to the self-occlusion.
FLAG3D provides accurate ground truth for these situa-
tions, which could improve current methods’ performance
in addressing challenging postures. 3) Motions generated
by state-of-the-art methods appear to be visually plausible
and context-aware at the beginning. However, they cannot
follow the text description faithfully as time goes on.
To summarize, our contributions are twofold: 1) We
present a new dataset named FLAG3D with highly accu-
rate and dense 3D poses, detailed language instruction, and
diverse video resources, which could be used for multiple
tasks for fitness activity applications. 2) We conduct vari-
ous empirical studies and in-depth analyses of the proposed
FLAG3D, which sheds light on the future research of activ-
ity understanding to devote more attention to the generality
and the interaction with natural language.
|
Tendulkar_FLEX_Full-Body_Grasping_Without_Full-Body_Grasps_CVPR_2023 | Abstract
Synthesizing 3D human avatars interacting realistically
with a scene is an important problem with applications in
AR/VR, video games, and robotics. Towards this goal, we
address the task of generating a virtual human – hands and
full body – grasping everyday objects. Existing methods ap-
proach this problem by collecting a 3D dataset of humans
interacting with objects and training on this data. However,
1) these methods do not generalize to different object po-
sitions and orientations or to the presence of furniture in
the scene, and 2) the diversity of their generated full-body
poses is very limited. In this work, we address all the above
challenges to generate realistic, diverse full-body grasps in
everyday scenes without requiring any 3D full-body grasp-
ing data. Our key insight is to leverage the existence of
both full-body pose and hand-grasping priors, composing
them using 3D geometrical constraints to obtain full-body
grasps. We empirically validate that these constraints can
generate a variety of feasible human grasps that are supe-
rior to baselines both quantitatively and qualitatively. See
our webpage for more details: flex.cs.columbia.edu.
| 1. Introduction
Generating realistic virtual humans is an exciting step
towards building better animation tools, video games, im-
mersive VR technology and more realistic simulators with
human presence. Towards this goal, the research commu-
nity has invested a lot of effort in collecting large-scale
3D datasets of humans [1–18]. However, the reliance on
data collection will be a major bottleneck when scaling to
broader scenarios, for two main reasons. First, data col-
lection using optical marker-based motion capture (MoCap)
systems is quite tedious to work with. This becomes even
more complicated when objects [19] or scenes [20] are in-
volved, requiring expertise in specialized hardware systems
[21–23], as well as commercial software [24–27]. Even
with the best combination of state-of-the-art solutions, this
process often requires multiple skilled technicians to ensure
clean data [19].
Second, it is practically impossible to capture all possible
ways of interacting with the ever-changing physical world.
The number of scenarios grows exponentially with every
considered variable (such as human pose, object class, task,
or scene characteristics). For this reason, models trained on
task-specific datasets suffer from the limitations of the data.
For example, methods that are supervised on the GRAB
dataset [19] for full-body grasping [28, 29] fail to grasp ob-
jects when the object position and/or orientation is changed,
and generate poses with virtually no diversity. This is under-
standable since the GRAB dataset mostly consists of stand-
ing humans grasping objects at a fixed height, interacting
with them in a relatively small range of physical motions.
However in realistic scenarios, we expect to see objects in
all sorts of configurations - lying on the floor, on the top
shelf of a cupboard, inside a kitchen sink, etc.
To build human models that work in realistic scene con-
figurations, we need to fundamentally re-think how to solve
3D tasks without needing any additional data, effectively
utilizing existing data. In this paper, we address the task of
generating full-body grasps for everyday objects in realistic
household environments, by leveraging the success of hand
grasping models [19, 30–32] and recent advances in human
body pose modeling [1, 33].
Our key observation is that we can compose different
3D generative models via geometrical and anatomical con-
straints. Having a strong prior over full-body human poses
(knowing what poses are feasible and natural), when com-
bined with strong grasping priors, allows us to express full-
body grasping poses. This combination leads to full-body
poses which satisfy both priors resulting in natural poses
that are suited for grasping objects, as well as hand grasps
that human poses can easily match.
Our contributions are as follows. First, we propose
FLEX , a framework to generate full-body grasps without
full-body grasping data. Given a pre-trained hand-only
grasping model as well as a pre-trained body pose prior,
we search in the latent spaces of these models to generate
a human mesh whose hand matches that of the hand grasp,
while simultaneously handling the constraints imposed by
the scene, such as avoiding obstacles. To achieve this, we
introduce a novel obstacle-avoidance loss that treats the hu-
man as a connected graph which breaks when intersected by
an obstacle. In addition, we show both quantitatively and
qualitatively that FLEX allows us to generate a wide range
of natural grasping poses for a variety of scenarios, greatly
improving on previous approaches. Finally, we introduce
the ReplicaGrasp dataset, built by spawning 3D objects in-
side ReplicaCAD [34] scenes using Habitat [35].
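As a schematic of what searching the latent spaces of two frozen priors can look like, the sketch below optimizes a body-pose latent so that the decoded body's hand joints match a fixed hand grasp while penalizing penetration into the scene; every object, loss, and weight here (body_prior, hand_grasp, scene_sdf, the 10× penalty) is a placeholder assumption, not FLEX's actual objective or its graph-based obstacle-avoidance loss.

```python
# Hypothetical latent-space search: keep both priors frozen and optimize only a
# body latent so that (1) the decoded body's hand joints match a grasp produced
# by the hand-grasp prior and (2) body vertices stay outside scene obstacles.
# body_prior, hand_grasp, and scene_sdf are placeholder objects for illustration.
import torch

def search_body_latent(body_prior, hand_grasp, scene_sdf, steps=300, lr=0.01):
    z = torch.zeros(1, body_prior.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        body = body_prior.decode(z)                       # joints + vertices
        # Hand-matching term: decoded hand should coincide with the grasp.
        match = (body.hand_joints - hand_grasp.joints).pow(2).mean()
        # Obstacle term: penalize vertices with negative signed distance.
        penetration = torch.relu(-scene_sdf(body.vertices)).mean()
        loss = match + 10.0 * penetration                 # arbitrary weighting
        opt.zero_grad()
        loss.backward()
        opt.step()
    return body_prior.decode(z.detach())
```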
|
Smith_ConStruct-VL_Data-Free_Continual_Structured_VL_Concepts_Learning_CVPR_2023 | Abstract
Recently, large-scale pre-trained Vision-and-Language
(VL) foundation models have demonstrated remarkable
capabilities in many zero-shot downstream tasks, achieving
competitive results for recognizing objects defined by as
little as short text prompts. However, it has also been
shown that VL models are still brittle in Structured VL
Concept (SVLC) reasoning, such as the ability to recognize
object attributes, states, and inter-object relations. This
leads to reasoning mistakes, which need to be corrected
as they occur by teaching VL models the missing SVLC
skills; often this must be done using private data where
the issue was found, which naturally leads to a data-free
continual (no task-id) VL learning setting. In this work,
we introduce the first Continual Data-Free Structured VL
Concepts Learning (ConStruct-VL) benchmark¹ and show
it is challenging for many existing data-free CL strategies.
We, therefore, propose a data-free method comprised of a
new approach of Adversarial Pseudo-Replay (APR) which
generates adversarial reminders of past tasks from past task
models. To use this method efficiently, we also propose a
continual parameter-efficient Layered-LoRA (LaLo) neural
architecture allowing no-memory-cost access to all past
models at train time. We show this approach outperforms
all data-free methods by as much as ∼7% while even
matching some levels of experience-replay (prohibitive for
applications where data-privacy must be preserved).
| 1. Introduction
Recently, large Vision-and-Language (VL) models
achieved great advances in zero-shot learning [4, 16, 16, 19,
23, 25, 40, 51, 58]. Pre-trained on hundreds of millions [40]
or billions [45] of image-and-text pairs collected from the
*This work is supported by the Defense Advanced Research Projects
Agency (DARPA) Contract No. FA8750-19-C-1001. Any opinions, find-
ings and conclusions or recommendations expressed in this material are
those of the author(s) and do not necessarily reflect the views of DARPA.
†Equal contribution
¹ Our code is publicly available at https://github.com/jamessealesmith/ConStruct-VL
Figure 1. Illustration for the Continual Data-Free Structured
VL Concepts Learning (ConStruct-VL). Structured VL Concept
(SVLC) understanding skills are added / refined over time, with
Adversarial Pseudo-Replay (APR) effectively countering catas-
trophic forgetting using our Layered-LoRA (LaLo) architecture’s
ability of efficient no-memory-cost access to past task models.
web, these VL models have demonstrated remarkable ca-
pabilities in understanding (recognizing [16, 40], detecting
[63], segmenting [57], etc.) objects appearing in images or
videos [39], thus moving beyond the previous paradigm of
fixed object classes to open-vocabulary models.
Despite these great advances, several recent works [52,
67] have found VL models to be brittle with respect to un-
derstanding Structured VL Concepts (SVLCs) - non-object
textual content such as object attributes, states, inter-object
relations (e.g. spatial, interaction, etc.), and more. Nat-
urally, it is important to alleviate these shortcomings, as
this lack of understanding of SVLCs can lead to embarrass-
ing errors on the VL model’s part for many applications,
such as analysis of multi-modal social networks data, multi-
modal chat, multi-modal document analysis, and more. Im-
portantly, in most of these applications: (i) different errors
made by the model surface sequentially in the course of
time, each time identifying another SVLC ‘skill’ missing
in the model; (ii) typically, to fix the errors, one would at-
tempt additional fine-tuning of the model on data collected
from the source on which the errors were observed (hoping
to avoid catastrophic forgetting of the previous model im-
(Figure 2 notation: I = original image; T = original text, using only positive matches; T_adv = adversarial text generated from T; P_M(I→T) = positive prediction probability of model M after softmax; W = an arbitrary weight matrix (conv/linear/embedding). The pseudo-replay distillation loss in step 5 is Loss = (P_M1(I→T) − P_M1(I→T_adv)) − (P_M2(I→T) − P_M2(I→T_adv)), i.e., it compares the two models' probability drops.)
Figure 2. Layered-LoRA (LaLo) architecture and Adversarial Pseudo-Replay (APR), illustrated for a two-task ConStruct-VL sequence
teaching ‘color’→‘spatial relation’ understanding skills. (1) A text+image pair is entering the model M2currently training to understand
spatial relations; M2is an extension of the model M1(previously trained to understand color) via adding a layer of low-rank (LoRA)
adapters to every parametric function of M1while keeping M1frozen. (2) M2produces positive prediction probability PM2(after
softmax) and applies CE loss. (3) Text+image pair are passed through M1(done without reloading in LaLo). (4) M1produces a positive
prediction probability PM1; using −PM1as adversarial loss, we perform a sign-gradient attack on the text input (after tokenizer+first
embedding layer). The produced text is expected to be adversarial in the direction of the SVLC used to train M1, color in this case. We,
therefore, expect some embedded text tokens to become ‘colored’ inconsistently with the image. (5) We use the produced adversarial
sample to distill from M1intoM2via the proposed pseudo-replay positive prediction probability drop preserving loss.
provement rounds); (iii) data sources where errors are ob-
served are typically private and cannot be passed between
these fine-tuning tasks. We, therefore, ask the question:
Can we sequentially add SVLC skills to multi-modal VL
models, in a privacy-preserving manner? This leads to the
data-free continual learning problem (e.g., [55, 56]) cast in
the multi-modal VL domain.
To this end, we introduce the first Continual Data-
Free Structured VL Concepts Learning (ConStruct-VL)
multi-modal benchmark , built on top of the popular Visual
Genome [21] and Visual Attributes in the Wild [37] datasets
using the protocol proposed in VL-Checklist [67], and show
it is challenging for many existing data-free CL strate-
gies, including recent SOTA prompting-based CL meth-
ods [55, 56]. We then offer a novel data-free CL method,
leveraging the multi-modal nature of the problem, to effec-
tively avoid forgetting. We propose the concept of Adversar-
ial Pseudo-Replay (APR), that (as opposed to the previous
pseudo-replay works [5, 46, 60]) generates negative exam-
ples to past task models conditioned on the current batch
data. For continual VL training we generate negatives in
one of the modalities by making it inconsistent with the
other modality via a series of adversarial attack steps utiliz-
ing past models (see Fig. 2 for an example). Intuitively, gen-
erating (inconsistent) negatives (APR) is easier than gener-
ating (consistent) positives (pseudo-replay). Also, past task
model attacks are likely to generate inconsistencies corre-
sponding to their tasks, thus leading to reduced forgetting
when we use the generated negative samples to distill from
the past task models the drop in prediction probabilities af-
ter adversarial examples are applied (Fig. 1). To use the
proposed APR technique we need the ability to efficiently
invoke past models at training time. We, therefore, propose a Layered-LoRA (LaLo) continual learning neural ar-
chitecture utilizing layered parameter-efficient (low-rank)
residual adapters, supporting invocation of any of the past
task models on any given training batch at no additional
memory cost (without the need to reload these past mod-
els into memory). Moreover, our proposed architecture can
be collapsed to the original model size by collapsing all the
adapters into their adapted parametric functions, thus sup-
porting inference on the final model at no additional cost.
Contributions: (i) we propose the challenging ConStruct-
VL benchmark, and show that existing data-free CL meth-
ods struggle in this setting; (ii) we propose a new concept
of Adversarial Pseudo-Replay (APR) specifically designed
for multi-modal continual learning, alongside a Layered-
LoRA (LaLo) architecture allowing invoking any of the
past task models efficiently without reloading and having
no additional inference time or parameters cost; and (iii) we
demonstrate significant improvements (over 6.8% increase
in final accuracy and ×5 smaller average forgetting) of the
proposed approach compared to all the popular data-free CL
baselines, as well as some amounts of experience replay.
|
Sinha_SparsePose_Sparse-View_Camera_Pose_Regression_and_Refinement_CVPR_2023 | Abstract
Camera pose estimation is a key step in standard 3D re-
construction pipelines that operate on a dense set of im-
ages of a single object or scene. However, methods for
pose estimation often fail when only a few images are avail-
able because they rely on the ability to robustly identify and
match visual features between image pairs. While these
methods can work robustly with dense camera views, cap-
turing a large set of images can be time-consuming or im-
practical. We propose SparsePose for recovering accu-
rate camera poses given a sparse set of wide-baseline im-
ages (fewer than 10). The method learns to regress initial
camera poses and then iteratively refine them after train-
ing on a large-scale dataset of objects (Co3D: Common
Objects in 3D). SparsePose significantly outperforms con-
ventional and learning-based baselines in recovering ac-
curate camera rotations and translations. We also demon-
strate our pipeline for high-fidelity 3D reconstruction using
only 5-9 images of an object. Project webpage: https:
//sparsepose.github.io/
| 1. Introduction
Computer vision has recently seen significant advances
in photorealistic new-view synthesis of individual ob-
jects [24, 41, 52, 60] or entire scenes [5, 62, 81]. Some of
these multiview methods take tens to hundreds of images
as input [5, 35, 36, 41], while others estimate geometry and
appearance from a few sparse camera views [43, 52, 76].
To produce high-quality reconstructions, these methods cur-
rently require accurate estimates of the camera position and
orientation for each captured image.
Recovering accurate camera poses, especially from a
limited number of images is an important problem for prac-
tically deploying 3D reconstruction algorithms, since it can
be challenging and expensive to capture a dense set of im-
ages of a given object or scene. While some recent meth-
ods for appearance and geometry reconstruction jointly
tackle the problem of camera pose estimation, they typi-
cally require dense input imagery and approximate initial-
Figure 1. SparsePose – Given sparse input views, our method
predicts initial camera poses and then refines the poses based
on learned image features aggregated using projective geometry.
SparsePose outperforms conventional methods for camera pose
estimation based on feature matching within a single scene and
enables high-fidelity novel view synthesis (shown for each itera-
tion of pose refinement) from as few as five input views.
ization [34, 67] or specialized capture setups such as imag-
ing rigs [27, 78]. Most conventional pose estimation algo-
rithms learn the 3D structure of the scene by matching im-
age features between pairs of images [46, 58], but they typ-
ically fail when only a few wide-baseline images are avail-
able. The main reason for this is that features cannot be
matched robustly resulting in failure of the entire recon-
struction process.
In such settings, it may be outright impossible to find
correspondences between image features. Thus, reliable
camera pose estimation requires learning a prior over the
geometry of objects. Based on this insight, the recent Rel-
Pose method [79] proposed a probabilistic energy-based
model that learns a prior over a large-scale object-centric
dataset [52]. RelPose is limited to predicting camera rota-
tions (i.e., translations are not predicted). Moreover, it op-
erates directly on image features without leveraging explicit
3D reasoning.
To alleviate these limitations, we propose SparsePose , a
method that predicts camera rotation and translation param-
eters from a few sparse input views based on 3D consistency
between projected image features (see Figure 1). We train
the model to learn a prior over the geometry of common
objects [52], such that after training, we can estimate the
camera poses for sparse images and generalize to unseen
object categories. More specifically, our method performs
a two-step coarse-to-fine image registration: (1) we predict
coarse approximate camera locations for each view of the
scene, and (2) these initial camera poses are used in a pose
refinement procedure that is both iterative and autoregres-
sive, which allows learning fine-grained camera poses. We
evaluate the utility of the proposed method by demonstrat-
ing its impact on sparse-view 3D reconstruction.
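A minimal sketch of the coarse-to-fine procedure described above, with the projective feature aggregation abstracted away; coarse_net, refine_net, and their methods are hypothetical placeholders rather than the actual SparsePose modules.

def estimate_poses(images, coarse_net, refine_net, n_iters=10):
    feats = [coarse_net.backbone(img) for img in images]   # per-view image features
    poses = coarse_net.regress(feats)                      # initial rotations and translations
    for _ in range(n_iters):                               # iterative, autoregressive refinement
        delta = refine_net(feats, poses)                   # updates predicted from pose-projected features
        poses = refine_net.apply_update(poses, delta)
    return poses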
Our method outperforms other methods for camera pose
estimation in sparse view settings. This includes conven-
tional image registration pipelines, such as COLMAP [58],
as well as recent learning-based methods, such as Rel-
Pose [79]. Overall, SparsePose enables real-life, sparse-
view reconstruction with as few as five images of common
household objects, and is able to predict accurate camera
poses, with only 3 source images of previously unseen ob-
jects.
In summary, we make the following contributions.
• We propose SparsePose, a method that predicts camera
poses from a sparse set of input images.
• We demonstrate that the method outperforms other tech-
niques for camera pose estimation in sparse settings.
• We evaluate our approach on 3D reconstruction from
sparse input images via an off-the-shelf method, where
our camera estimation enables much higher-fidelity re-
constructions than competing methods.
|
Tian_ResFormer_Scaling_ViTs_With_Multi-Resolution_Training_CVPR_2023 | Abstract
Vision Transformers (ViTs) have achieved overwhelming
success, yet they suffer from vulnerable resolution scalabil-
ity,i.e., the performance drops drastically when presented
with input resolutions that are unseen during training. We
introduce, ResFormer, a framework that is built upon the
seminal idea of multi-resolution training for improved per-
formance on a wide spectrum of, mostly unseen, testing res-
olutions. In particular, ResFormer operates on replicated
images of different resolutions and enforces a scale con-
sistency loss to engage interactive information across dif-
ferent scales. More importantly, to alternate among vary-
ing resolutions effectively, especially novel ones in testing,
we propose a global-local positional embedding strategy
that changes smoothly conditioned on input sizes. We con-
duct extensive experiments for image classification on Im-
ageNet. The results provide strong quantitative evidence
that ResFormer has promising scaling abilities towards a
wide range of resolutions. For instance, ResFormer-B-MR
achieves a Top-1 accuracy of 75.86% and 81.72% when
evaluated on relatively low and high resolutions respec-
tively ( i.e., 96 and 640), which are 48% and 7.49% better
than DeiT-B. We also demonstrate, moreover, ResFormer is
flexible and can be easily extended to semantic segmenta-
tion, object detection and video action recognition.
| 1. Introduction
The strong track record of Transformers in a multi-
tude of Natural Language Processing [53] tasks has moti-
vated an extensive exploration of Transformers in the com-
puter vision community. At its core, Vision Transformers
(ViTs) build upon the multi-head self-attention mechanisms
for feature learning through partitioning input images into
patches of identical sizes and processing them as sequences
for dependency modeling. Owing to their strong capabil-
†Corresponding author.
Note that we use resolution, scale and size interchangeably.
(Plot: Top-1 accuracy (%) vs. testing resolution for DeiT-S, DeiT-B, ResFormer-S-MR and ResFormer-B-MR; training resolutions marked.)
Figure 1. Comparisons between ResFormer and vanilla ViTs. Res-
Former achieves promising results on a wide range of resolutions.
ities in capturing relationships among patches, ViTs and
their variants demonstrate prominent results in versatile vi-
sual tasks, e.g., image classification [36, 50, 65, 70], object
detection [4, 30, 55], vision-language modeling [25, 40, 54]
and video recognition [3, 29, 37, 64].
While ViTs have been shown effective, it remains un-
clear how to scale ViTs to deal with inputs with varying
sizes for different applications. For instance, in image clas-
sification, the de facto training resolution of 224 is com-
monly adopted [36, 50, 51, 65]. However, among works in
pursuit of reducing the computational cost of ViTs [39, 43],
shrinking the spatial dimension of inputs is a popular strat-
egy [6, 32, 56]. On the other hand, fine-tuning with higher
resolutions ( e.g., 384) is widely used [15, 36, 48, 51, 59, 62]
to produce better results. Similarly, dense prediction tasks
such as semantic segmentation and object detection also re-
quire relatively high resolution inputs [1, 30, 35, 55].
Despite of the necessity for both low and high resolu-
tions, limited effort has been made to equip ViTs with the
ability to handle different input resolutions. Given a novel
resolution that is different from that used during training, a
common practice adopted for inference is to keep the patch
size fixed and then perform bicubic interpolation on posi-
tional embeddings directly to the corresponding scale. As
shown in Sec. 3, while such a strategy is able to scale ViTs
to relatively larger input sizes, the results on low resolutions
plunge sharply. In addition, significant changes between
training and testing scales also lead to limited results ( e.g.,
DeiT-S trained on a resolution of 224 degrades by 1.73%
and 7.2% when tested on 384 and 512 respectively).
Multi-resolution training, which randomly resizes im-
ages to different resolutions, is a promising way to accom-
modate varying resolutions at test time. While it has been
widely used by CNNs for segmentation [22], detection [24]
and action recognition [58], generalizing such an idea to
ViTs is challenging and less explored. For CNNs, thanks
to the stacked convolution design, all input images, regard-
less of their resolutions, share the same set of parameters in
multi-resolution training. For ViTs, although it is feasible
to share parameters for all samples, bicubic interpolations
of positional embeddings, which are not scale-friendly, are
still needed when iterating over images of different sizes.
In this paper, we posit that positional embeddings of
ViTs should be adjusted smoothly across different scales for
multi-resolution training. The resulting model then has the
potential to scale to different resolutions during inference.
Furthermore, as images in different scales contain objects
of different sizes, we propose to explore useful information
across different resolutions for improved performance in a
similar spirit to feature pyramids, which are widely used
in hierarchical backbone designs for both image classifica-
tion [24, 36] and dense prediction tasks [22, 23, 33].
To this end, we introduce ResFormer, which takes
in inputs as multi-resolution images during training and ex-
plores multi-scale clues for better results. Trained in a sin-
gle run, ResFormer is expected to generalize to a large span
of testing resolutions. In particular, given an image dur-
ing training, ResFormer resizes it to different scales, and
then uses all scales in the same feed-forward process. To
encourage information interaction among different resolu-
tions, we introduce a scale consistency loss, which bridges
the gap between low-resolution and high-resolution features
by self-knowledge distillation. More importantly, to facili-
tate multi-resolution training, we propose a global-local po-
sitional embedding strategy, which enforces parameter shar-
ing and changes smoothly across different resolutions with
the help of convolutions. Given a novel resolution at testing,
ResFormer dynamically generates a new set of positional
embeddings and performs inference.
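A minimal sketch of one multi-resolution training step in the spirit described above. Logit-level self-distillation from the largest resolution stands in for the paper's scale consistency loss; the exact loss and the global-local positional embeddings differ in detail, and the model is assumed to accept arbitrary input sizes.

import torch.nn.functional as F

def multi_res_step(model, images, labels, resolutions=(128, 160, 224), tau=1.0):
    # Replicate the batch at several resolutions and run all of them through the shared model.
    logits = {r: model(F.interpolate(images, size=(r, r), mode='bicubic',
                                     align_corners=False)) for r in resolutions}
    loss = sum(F.cross_entropy(l, labels) for l in logits.values())
    # Scale consistency: the highest resolution acts as a stop-gradient teacher.
    teacher = F.softmax(logits[max(resolutions)].detach() / tau, dim=-1)
    for r in resolutions:
        if r != max(resolutions):
            loss = loss + (tau ** 2) * F.kl_div(
                F.log_softmax(logits[r] / tau, dim=-1), teacher, reduction='batchmean')
    return loss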
To validate the efficacy of ResFormer, we conduct com-
prehensive experiments on ImageNet-1K [13]. We observe
that ResFormer makes remarkable gains compared with
vanilla ViTs which are trained on single resolution. Given
the testing resolution of 224, ResFormer-S-MR trained on
resolutions of 128, 160 and 224 achieves a Top-1 accuracy
of 82.16%, outperforming the 224-trained DeiT-S [50] by
2.24% . More importantly, as illustrated in Fig. 1, Res-
Former surpasses DeiT by a large margin on unseen res-
olutions, e.g., ResFormer-S-MR outperforms DeiT-S by6.67% and 56.04% when tested on 448 and 80 respec-
tively. Furthermore, we also validate the scalability of Res-
Former on dense prediction tasks, e.g., ResFormer-B-MR
achieves 48.30 mIoU on ADE20K [72] and 47.6 APboxon
COCO [34]. We also show that ResFormer can be readily
adapted for video action recognition with different sizes of
inputs via building upon TimeSFormer [3].
|
Sun_DeFeeNet_Consecutive_3D_Human_Motion_Prediction_With_Deviation_Feedback_CVPR_2023 | Abstract
Let us rethink the real-world scenarios that require hu-
man motion prediction techniques, such as human-robot
collaboration. Current works simplify the task of predicting
human motions into a one-off process of forecasting a short
future sequence (usually no longer than 1 second) based
on a historical observed one. However, such simplification
may fail to meet practical needs due to the neglect of the fact
that motion prediction in real applications is not an isolated
“observe then predict” unit, but a consecutive process com-
posed of many rounds of such unit, semi-overlapped along
the entire sequence. As time goes on, the predicted part
of previous round has its corresponding ground truth ob-
servable in the new round, but their deviation in-between
is neither exploited nor able to be captured by existing iso-
lated learning fashion. In this paper, we propose DeFeeNet,
a simple yet effective network that can be added on exist-
ing one-off prediction models to realize deviation percep-
tion and feedback when applied to consecutive motion pre-
diction task. At each prediction round, the deviation gen-
erated by previous unit is first encoded by our DeFeeNet,
and then incorporated into the existing predictor to enable a
deviation-aware prediction manner, which, for the first time, allows for information transmission across adjacent prediction
units. We design two versions of DeFeeNet as MLP-based
and GRU-based, respectively. On Human3.6M and more
complicated BABEL, experimental results indicate that our
proposed network improves consecutive human motion pre-
diction performance regardless of the basic model.
| 1. Introduction
In the age of intelligence, humans have to share a com-
mon space with robots, machines or autonomous systems
for a sustained period of time, such as in human-motion col-
laboration [33], motion tracking [10] and autonomous driv-
ing [38] scenarios. To satisfy human needs while ensuring
their safety, deterministic human motion prediction, which
is aimed at forecasting future human pose sequence given
historical observations, has become a research hotspot and
already formed a relatively complete implement procedure.
As human poses may become less predictable over time,
current works abstract the practical needs into a simplified
task of learning to “observe a few frames and then predict
the following ones”, with the prediction length mostly set
to1 second [4, 7–9, 16, 20, 21, 24, 27, 28, 34, 40].
Essentially, such simplified prediction task can be re-
garded as a one-off acquired target limited within the short,
isolated “observe then predict” unit. Nevertheless, this unit
is actually not applicable in reality that requires consecu-
tive observation and prediction on humans during the long
period of human-robot/machine coexistence. Though, intu-
itively, sliding such unit round by round along the time may
roughly satisfy the need for consecutive prediction, one ne-
glected fact is that each round of prediction unit is arranged
in a semi-overlapped structure (see Figure 1). As time goes
on, what was predicted before has its corresponding ground
truth observable now, but their deviation in-between (i.e.,
mistake) remains unexplored.
Our key insight lies in that robots/machines should be
able to detect and learn from the mistakes they have made.
Due to the inherent continuity and consistency of human
motion data, such information would be very powerful to
improve future prediction accuracy. The semi-overlapped,
multi-round unit structure offers a natural way to transmit
deviation feedback across adjacent prediction units, which,
however, are unable to be realized by current one-off unit
prediction strategy.
Based on this situation, in this paper, we propose De-
FeeNet, a simple yet effective network which can be added
on existing one-off human motion prediction models to im-
plement Deviation perception and Feedback when applied
to consecutive prediction. By mining the element “devia-
tion” that neglected in previous works, we introduce induc-
tive bias, where DeFeeNet can learn to derive certain “pat-
terns” from past deviations and then constrain the model to
make better predictions. To be specific, our DeFeeNet is
constructed in two versions: MLP-based version containing
one temporal-mixing MLP and one spatial-mixing MLP;
GRU-based version containing one-layer GRU with fully-
connected layers only for dimension alignment. At each
prediction round, DeFeeNet serves as an additional branch
inserted into the existing predictor, which encodes the de-
viation of the previous unit into latent representation and
then transmits it into the main pipeline. In summary, our
contributions are as follows:
• We mine the element “deviation” that is neglected in ex-
isting human motion prediction works, extending cur-
rent within-unit research horizon to cross-unit, which,
for the first time, allows for information transmission
across adjacent units.
• We propose DeFeeNet, a simple yet effective network
that can be added on existing motion prediction models
to enable consecutive prediction (i) with flexible round
number and (ii) in a deviation-aware manner. It can
learn to derive certain patterns from past mistakes and
constrain the model to make better predictions.
• Our DeFeeNet is agnostic to its basic models, and ca-
pable of yielding stably improved performance on Human3.6M [15], as well as a more recent and challeng-
ing dataset BABEL [32] that contains samples with
different actions and their transitions.
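As a rough illustration of the deviation-feedback loop described above: at every round the previous prediction is compared with its now-observable ground truth, the deviation is encoded by DeFeeNet, and the resulting code conditions the next prediction. This is a minimal sketch; the window lengths, the stride between units, and the predictor/DeFeeNet interfaces are illustrative assumptions.

def consecutive_predict(predictor, defeenet, pose_stream, obs_len=50, pred_len=75):
    predictions, prev_pred = [], None
    for t in range(obs_len, len(pose_stream) - pred_len + 1, pred_len):
        obs = pose_stream[t - obs_len:t]                         # observed frames of this round
        if prev_pred is None:
            feedback = None                                      # first round: no past mistake yet
        else:
            deviation = pose_stream[t - pred_len:t] - prev_pred  # previous prediction vs. its ground truth
            feedback = defeenet(deviation)                       # latent deviation representation
        prev_pred = predictor(obs, feedback)                     # deviation-aware prediction for this round
        predictions.append(prev_pred)
    return predictions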
|
Tanida_Interactive_and_Explainable_Region-Guided_Radiology_Report_Generation_CVPR_2023 | Abstract
The automatic generation of radiology reports has the
potential to assist radiologists in the time-consuming task
of report writing. Existing methods generate the full re-
port from image-level features, failing to explicitly focus
on anatomical regions in the image. We propose a sim-
ple yet effective region-guided report generation model
that detects anatomical regions and then describes indi-
vidual, salient regions to form the final report. While
previous methods generate reports without the possibility
of human intervention and with limited explainability, our
method opens up novel clinical use cases through addi-
tional interactive capabilities and introduces a high de-
gree of transparency and explainability. Comprehensive ex-
periments demonstrate our method’s effectiveness in report
generation, outperforming previous state-of-the-art models,
and highlight its interactive capabilities. The code and
checkpoints are available at https://github.com/
ttanida/rgrg .
| 1. Introduction
Chest radiography (chest X-ray) is the most common
type of medical image examination in the world and is criti-
cal for identifying common thoracic diseases such as pneu-
monia and lung cancer [19, 37]. Given a chest X-ray, radi-
ologists examine each depicted anatomical region and de-
scribe findings of both normal and abnormal salient regions
in a textual report [12]. Given the large volume of chest X-
rays to be examined in daily clinical practice, this often be-
comes a time-consuming and difficult task, which is further
exacerbated by a shortage of trained radiologists in many
healthcare systems [3, 40, 41]. As a result, automatic radi-
ology report generation has emerged as an active research
area with the potential to alleviate radiologists’ workload.
Generating radiology reports is a difficult task since re-
ports consist of multiple sentences, each describing a spe-
cific medical observation of a specific anatomical region.
∗Equal contribution
(Figure 1 panels: Region-guided Report Generation Model → Generated Report: “The right lung is clear. Mild to moderate left pleural effusion as well as adjacent left basal atelectasis. Mild to moderate cardiomegaly. There is thoracolumbar dextroscoliosis.”)
Figure 1. Our approach at a glance. Unique anatomical regions
of the chest are detected, the most salient regions are selected for
the report and individual sentences are generated for each region.
Consequently, each sentence in the generated report is explicitly
grounded on an anatomical region.
As such, current radiology report generation methods tend
to generate reports that are factually incomplete ( i.e., miss-
ing key observations in the image) and inconsistent ( i.e.,
containing factually wrong information) [28]. This is fur-
ther exacerbated by current methods utilizing image-level
visual features to generate reports, failing to explicitly fo-
cus on salient anatomical regions in the image. Another is-
sue regarding existing methods is the lack of explainability.
A highly accurate yet opaque report generation system may
not achieve adoption in the safety-critical medical domain if
the rationale behind a generated report is not transparent and
explainable [11, 14, 27]. Lastly, current methods lack inter-
activity and adaptability to radiologists’ preferences. E.g.,
a radiologist may want a model to focus exclusively on spe-
cific anatomical regions within an image.
Inspired by radiologists’ working patterns, we propose
a simple yet effective Region- Guided Radiology Report
Generation (RGRG) method to address the challenges high-
lighted before. Instead of relying on image-level visual
features, our work is the first in using object detection
to directly extract localized visual features of anatomical
regions, which are used to generate individual, anatomy-
specific sentences describing any pathologies to form the
final report. Conceptually, we divide and conquer the dif-
ficult task of generating a complete and consistent report
from the whole image into a series of simple tasks of gen-
erating short, consistent sentences for a range of isolated
anatomical regions, thereby achieving both completeness
and consistency for the final, composed report.
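A minimal sketch of the region-guided pipeline described above: detect anatomical regions, keep the salient ones, decode one sentence per selected region, and concatenate the sentences into the report. The detector/selector/decoder interfaces are illustrative assumptions, not the released API.

def generate_report(image, detector, selector, decoder):
    regions = detector(image)                        # [(bounding_box, region_feature), ...]
    grounded_sentences = []
    for box, feature in regions:
        if selector(feature):                        # is this region salient enough to describe?
            sentence = decoder.generate(feature)     # anatomy-conditioned sentence generation
            grounded_sentences.append((box, sentence))
    report = " ".join(sentence for _, sentence in grounded_sentences)
    return report, grounded_sentences                # each sentence stays grounded on its box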
While existing models [7, 17, 26] can produce heatmaps
to illustrate which parts of an image were attended dur-
ing report generation, our model visually grounds each sen-
tence in the generated report on a predicted bounding box
around an anatomical region (see Fig. 1), thereby introduc-
ing a high degree of explainability to the report generation
process. The visual grounding allows radiologists to eas-
ily verify the correctness of each generated sentence, which
may increase trust in the model’s predictions [11, 27, 39].
Moreover, our approach enables interactive use, allowing
radiologists to select anatomical structures or draw bound-
ing boxes around regions of interest for targeted description
generation, enhancing flexibility in the clinical workflow.
Our contributions are as follows:
•We introduce a simple yet effective Region- Guided
Radiology Report Generation (RGRG) method that
detects anatomical regions and generates individual
descriptions for each. Empirical evidence demon-
strates that our model produces relevant sentences per-
taining to the anatomy. To the best of our knowledge,
it is the first report generation model to visually ground
sentences to anatomical structures.
•We condition a pre-trained language model on each
anatomical region independently. This enables
anatomy-based sentence generation , where radiolo-
gists interactively select individual, detected anatomi-
cal regions for which descriptions are generated.
•Additionally, our approach enables radiologists to
manually define regions of interest using bounding
boxes, generating corresponding descriptions. We
assess the impact of these manually drawn boxes,
demonstrating the robustness of the selection-based
sentence generation task.
•Forfull radiology report generation , a module is in-
troduced to select salient anatomical regions, whose
generated descriptions are concatenated to form factu-
ally complete and consistent reports. We empirically
demonstrate this on the MIMIC-CXR dataset [18, 19],
where we outperform competitive baselines in both
language generation and clinically relevant metrics.
|
Sun_Decoupling_Learning_and_Remembering_A_Bilevel_Memory_Framework_With_Knowledge_CVPR_2023 | Abstract
The dilemma between plasticity and stability arises as a
common challenge for incremental learning. In contrast, the
human memory system is able to remedy this dilemma owing
to its multi-level memory structure, which motivates us to pro-
pose a BilevelMemory system with Knowledge Projection
(BMKP) for incremental learning. BMKP decouples the
functions of learning and remembering via a bilevel-memory
design: a working memory responsible for adaptively model
learning, to ensure plasticity; a long-term memory in charge
of enduringly storing the knowledge incorporated within the
learned model, to guarantee stability. However, an emerging
issue is how to extract the learned knowledge from the work-
ing memory and assimilate it into the long-term memory. To
approach this issue, we reveal that the parameters learned
by the working memory are actually residing in a redundant
high-dimensional space, and the knowledge incorporated in
the model can have a quite compact representation under
a group of pattern basis shared by all incremental learning
tasks. Therefore, we propose a knowledge projection pro-
cess to adaptively maintain the shared basis, with which the
loosely organized model knowledge of working memory is
projected into the compact representation to be remembered
in the long-term memory. We evaluate BMKP on CIFAR-10,
CIFAR-100, and Tiny-ImageNet. The experimental results
show that BMKP achieves state-of-the-art performance with
lower memory usage1.
| 1. Introduction
In an ever-changing environment, intelligent systems are
expected to learn new knowledge incrementally without for-
getting, which is referred to as incremental learning (IL)
*Corresponding author: Yangli-ao Geng ([email protected]).
1The code is available at https://github.com/SunWenJu123/BMKP
[8,18]. Based on different design principles, many incremen-
tal learning methods have been proposed [16,19,24]. Among
them, memory-based models are leading the way in perfor-
mance and have attracted enormous attention [3, 17, 26, 33].
The core idea of memory-based methods is to utilize
partial old-task information to guide the model to learn with-
out forgetting [20]. As illustrated in Figure 1 (a), memory-
based methods typically maintain a memory to store the old-
task information. According to the type of stored informa-
tion, memory-based methods further fall into two categories:
rehearsal-based and gradient-memory-based. Rehearsal-
based methods [3, 17, 28, 29, 39] keep an exemplar memory
(or generative model) to save (or generate) old-task samples
or features, and replay them to recall old-task knowledge
when learning new tasks. However, the parameter fitting of
new tasks has the potential to overwrite the old-task knowl-
edge, especially when the stored samples are unable to ac-
curately simulate old-task data distributions, leading to low
stability. Gradient-memory-based methods [26, 33] maintain
a memory to store the gradient directions that may interfere
with the performance of old tasks, and only update the learn-
ing model with the gradients that are orthogonal to the stored
ones. Although the gradient directions restriction guarantees
stability, this restriction may prevent the model from being
optimized toward the right direction for a new task, which
would result in low plasticity. Therefore, both of these two
types of methods suffer from low plasticity or stability. The
reason is that their model is in charge of both learning new
task knowledge and maintaining old task knowledge. The
limited model capacity will inevitably lead to a plasticity-
stability trade-off in the face of a steady stream of knowledge,
i.e., plasticity-stability dilemma [21].
In contrast, human brains are known for both high plas-
ticity and stability for incremental learning, owing to its
multi-level memory system [7]. Figure 1 (b) illustrates how
a human brain works in the classic Atkinson-Shiffrin human
Figure 1. Diagram of memory-based incremental learning method (a), Atkinson-Shiffrin human memory model [30] (b), and the architecture of the proposed BMKP (c).
memory model [30]. Inspired by the mechanism of human
memory, this paper proposes a Bilevel Memory model with
Knowledge Projection (BMKP) for incremental learning. As
illustrated in Figure 1 (c), BMKP adopts a bilevel-memory
design, including a working memory (corresponds to the
short-term memory of the human brain) and a long-term
memory. The working memory is implemented as a neural
network responsible for adaptively learning new knowledge
and inference. The long-term memory is in charge of steadily
storing all the learned knowledge. Similar to the human
memory, this bilevel-memory structure endows BMKP with
both high plasticity and stability by decoupling the functions
of learning and remembering.
An emerging issue for this bilevel memory framework
is how to extract the learned knowledge from the working
memory and assimilate it into the long-term memory. In the
working memory, the knowledge is represented as the trained
parameters in a high-dimensional space, which we call Pa-
rameter Knowledge Space (PKS) . However, this space is
usually overparameterized [6], implying that the knowledge
representation in PKS is loosely organized. Therefore, in-
stead of directly storing the learned parameters, we propose
to recognize the underlying common patterns, and further
utilize these patterns as the basis to represent the parameters.
Specifically, we define the space spanned by these pattern
basis as the Core Knowledge Space (CKS) , in which the
knowledge can be organized in a quite compact form with-
out loss of performance. Based on these two knowledge
spaces, we propose a knowledge projection process to adap-
tively maintain a group of CKS pattern basis shared by all
incremental learning tasks, with which the loosely organized
model knowledge in PKS can be projected into CKS to ob-
tain the compact knowledge representation. The compact
representation, instead of the raw model knowledge, is transferred to the long-term memory for storing.
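A minimal sketch of the projection idea described above: working-memory weights (PKS) are expressed as a compact code under a shared pattern basis (CKS), and only the code is kept in long-term memory. The least-squares projection here is an illustrative stand-in for the paper's actual scheme.

import torch

def project_to_cks(weight, basis):
    # weight: (d, m) trained parameters; basis: (d, k) shared pattern basis with k << d.
    return torch.linalg.lstsq(basis, weight).solution    # (k, m) compact code stored long-term

def restore_from_cks(code, basis):
    # Re-synthesize working-memory parameters from the stored compact code.
    return basis @ code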
The contributions of this work are summarized as follows:
•Inspired by the multi-level human memory system, we
propose a bilevel-memory framework for incremental
learning, which benefits from both high plasticity and
stability.
•We propose a knowledge projection process to project
knowledge from PKS into compact representation in
CKS, which not only improves memory utilization effi-
ciency but also enables forward knowledge transfer for
incremental learning.
•A representation compaction regularizer (Eq. (4)) is
designed to encourage the working memory to reuse
previously learned knowledge, which enhances both
the memory efficiency and the performance of BMKP.
•We evaluate BMKP on CIFAR-10, CIFAR-100, and
Tiny-ImageNet. The experimental results show that
BMKP outperforms most of state-of-the-art baselines
with lower memory usage.
|
Tang_What_Happened_3_Seconds_Ago_Inferring_the_Past_With_Thermal_CVPR_2023 | Abstract
Inferring past human motion from RGB images is chal-
lenging due to the inherent uncertainty of the prediction
problem. Thermal images, on the other hand, encode
traces of past human-object interactions left in the envi-
ronment via thermal radiation measurement. Based on
this observation, we collect the first RGB-Thermal dataset
for human motion analysis, dubbed Thermal-IM. Then
we develop a three-stage neural network model for accu-
rate past human pose estimation. Comprehensive experi-
ments show that thermal cues significantly reduce the am-
biguities of this task, and the proposed model achieves
remarkable performance. The dataset is available at
https://github.com/ZitianTang/Thermal-IM.
| 1. Introduction
Imagine we have a robot assistant at home. When it
comes to offering help, it may wonder what we did in the
past. For instance, it wonders which cups were used, then
cleans them. Or it can better predict our future actions once
the past is known. But how can it know this? Consider
the images in Fig. 1. Can you tell what happened 3 sec-
onds ago? An image contains a wealth of information. The
robot may extract geometric and semantic cues, infer the
affordance of the scene, and imagine how humans would
interact and fit in the environment. Therefore, in the left
image, it can confidently deduce that the person was sitting
on the couch; however, it is not sure where. Similarly, it can
imagine many possibilities in the right image but cannot be
Corresponding to: [email protected]
sure. Indeed, given a single RGB image, the problem is
inherently ill-posed.
In this paper, we investigate the use of a novel sensor
modality, thermal data, for past human behavior analysis.
Thermal images are typically captured by infrared cameras,
with their pixel values representing the temperature at the
corresponding locations in the scene. As heat transfer oc-
curs whenever there is contact or interaction between hu-
man bodies and their environment, thermal images serve as
strong indicators of where and what has happened. Con-
sider the thermal images in Fig. 2. With thermal images, we
can instantly determine where the person was sitting. This
is because the objects they contacted were heated, leaving
behind bright marks. If a robot assistant is equipped with
a thermal camera, it can more effectively infer the past and
provide better assistance. Otherwise, we may need a camera
installed in every room and keep them operational through-
out the day.
With these motivations in mind, we propose to develop a
system that, given an indoor thermal image with a human in
it, generates several possible poses of the person N (N = 3)
seconds ago. To achieve this goal, we first collect a Thermal
Indoor Motion dataset (Thermal-IM) composed of RGB,
thermal, and depth videos of indoor human motion with es-
timated human poses. In each video, the actor performs
various indoor movements ( e.g., walking, sitting, kneeling)
and interacts with different objects ( e.g., couch, chair, cabi-
net, table) in a room. Then we design a novel, interpretable
model for past human pose estimation. The model consists
of three stages: the first stage proposes where the human
might have been 3 seconds ago, leveraging the most dis-
cernible information in thermal images. The second stage
infers what action the human was performing. Finally, the
Figure 2. Thermal images to the rescue: Thermal images encode traces of past human-object interactions, which can help us infer past human behavior and understand objects’ affordance. In this work, we focus on estimating human body poses a few seconds ago.
third stage synthesizes an exact pose.
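A minimal sketch of the three-stage inference described above; the three networks and their call signatures are illustrative assumptions, not the paper's actual modules.

import torch

def infer_past_pose(thermal_image, current_pose, location_net, action_net, pose_net, k=3):
    heatmap = location_net(thermal_image, current_pose)          # stage 1: where was the person 3 s ago?
    candidates = torch.topk(heatmap.flatten(), k).indices        # a few plausible past locations
    hypotheses = []
    for loc in candidates:
        action = action_net(thermal_image, current_pose, loc)    # stage 2: what were they doing there?
        hypotheses.append(pose_net(thermal_image, loc, action))  # stage 3: synthesize an exact past pose
    return hypotheses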
Experiments show that our method managed to gener-
ate plausible past poses based on the locations and shapes
of thermal cues. These results are more accurate than the
RGB-only counterparts, thanks to the reduced uncertainty
of past human movements. Furthermore, our model auto-
matically and implicitly discovers the correlation between
thermal mark intensity and time.
The contributions of this work are the following:
• We make the first attempt at a novel past human motion
estimation task by exploiting thermal footprints.
• We construct the Thermal-IM dataset, which contains
synchronized RGB-Thermal and RGB-Depth videos
of indoor human motion.
• We propose an effective three-stage model to infer past
human motion from thermal images.
|
Sun_BKinD-3D_Self-Supervised_3D_Keypoint_Discovery_From_Multi-View_Videos_CVPR_2023 | Abstract
Quantifying motion in 3D is important for studying the
behavior of humans and other animals, but manual pose an-
notations are expensive and time-consuming to obtain. Self-
supervised keypoint discovery is a promising strategy for
estimating 3D poses without annotations. However, current
keypoint discovery approaches commonly process single 2D
views and do not operate in the 3D space. We propose a
new method to perform self-supervised keypoint discovery
in 3D from multi-view videos of behaving agents, without
any keypoint or bounding box supervision in 2D or 3D. Our
method, BKinD-3D, uses an encoder-decoder architecture
with a 3D volumetric heatmap, trained to reconstruct spa-
tiotemporal differences across multiple views, in addition to
joint length constraints on a learned 3D skeleton of the sub-
ject. In this way, we discover keypoints without requiring
manual supervision in videos of humans and rats, demon-
strating the potential of 3D keypoint discovery for studying
behavior.
| 1. Introduction
All animals behave in 3D, and analyzing 3D posture and
movement is crucial for a variety of applications, includ-
ing the study of biomechanics, motor control, and behav-
ior [27]. However, annotations for supervised training of
3D pose estimators are expensive and time-consuming to
obtain, especially for studying diverse animal species and
varying experimental contexts. Self-supervised keypoint
discovery has demonstrated tremendous potential in discov-
ering 2D keypoints from video [19,20,40], without the need
for manual annotations. These models have not been well-
explored in 3D, which is more challenging compared to 2D
*Equal contribution
†Work done outside of SAIT
Figure 1. Self-supervised 3D keypoint discovery. Previous work
studying self-supervised keypoints either requires 2D supervision
for 3D pose estimation or focuses on 2D keypoint discovery. Cur-
rently, self-supervised 3D keypoint discovery is not well-explored.
We propose methods for discovering 3D keypoints directly from
multi-view videos of different organisms, such as human and rats,
without 2D or 3D supervision. The 3D keypoint discovery exam-
ples demonstrate the results from our method.
due to depth ambiguities, a larger search space, and the need
to incorporate geometric constraints. Our goal is to enable
3D keypoint discovery of humans and animals from syn-
chronized multi-view videos, without 2D or 3D supervision.
Self-Supervised 3D Keypoint Discovery. Previous
works for self-supervised 3D keypoints typically start from
a pre-trained 2D pose estimator [25,42], and thus do not per-
form keypoint discovery (Figure 1). These models are suit-
able for studying human poses because 2D human pose esti-
mators are widely available and the pose and body structure
of humans is well-defined. However, for many scientific
applications [27, 33, 40], it is important to track diverse or-
ganisms in different experimental contexts. These situations
require time-consuming 2D or 3D annotations for training
pose estimation models. The goal of our work is to en-
able 3D keypoint discovery from multi-view videos directly,
without any 2D or 3D supervision, in order to accelerate the
analysis of 3D poses from diverse animals in novel settings.
To the best of our knowledge, self-supervised 3D keypoint
discovery have not been well-explored for real-world multi-
view videos.
Behavioral Videos. We study 3D keypoint discovery
in the setting of behavioral videos with stationary cameras
and backgrounds. We chose this for several reasons. First,
this setting is common in many real-world behavior analy-
sis datasets [2,10,21,28,33,37,39], where there has been an
emerging trend to expand the study of behavior from 2D to
3D [27]. Thus, 3D keypoint discovery would directly ben-
efit many scientific studies in this space using approaches
such as biomechanics, motor control, and behavior [27].
Second, studying behavioral videos in 3D enables us to
leverage recent work in 2D keypoint discovery for behav-
ioral videos [40]. Finally, this setting enables us to tackle
the 3D keypoint discovery challenge in a modular way. For
example, in behavior analysis experiments, many tools are
already available for camera calibration [24], and we can
assume that camera parameters are known.
Our Approach. The key to our approach, which we
callBehavioral KeypointDiscovery in 3D(BKinD-3D),
is to encode self-supervised learning signals from videos
across multiple views into a single 3D geometric bottle-
neck. We leverage the spatiotemporal difference recon-
struction loss from [40] and use multi-view reconstruction
to train an encoder-decoder architecture. Our method does
not use any bounding boxes or keypoint annotations as su-
pervision. Critically, we impose links between our discov-
ered keypoints to discover connectivity across points. In
other words, keypoints on the same parts of the body are
connected, so that we are able to enforce joint length con-
straints in 3D. To show that our model is applicable across
multiple settings, we demonstrate our approach on multi-
view videos from different organisms. To summarize:
• We introduce self-supervised 3D keypoint discovery,
which discovers 3D pose from real-world multi-view
behavioral videos of different organisms, without any
2D or 3D supervision.
• We propose a novel method (BKinD-3D) for end-to-
end 3D discovery from video using multi-view spa-
tiotemporal difference reconstruction and 3D joint
length constraints.
• We demonstrate quantitatively that our work signifi-
cantly closes the gap between supervised 3D methods
Method | 3D sup. | 2D sup. | camera params | data type
Isakov et al. [17], DANNCE [7] | ✓ | ✓ | intrinsics, extrinsics | real
Rhodin et al. [35] | ✓ | optional | intrinsics | real
Anipose [24], DeepFly3D [12] | × | ✓ | intrinsics, extrinsics | real
EpipolarPose [25], CanonPose [43] | × | ✓ | optional | real
MetaPose [42] | × | ✓ | × | real
Keypoint3D [3] | × | × | intrinsics, extrinsics | simulation
Ours (3D discovery) | × | × | intrinsics, extrinsics | real
Table 1. Comparison of our work with representative related
work for 3D pose using multi-view training . Previous works
require either 3D or 2D supervision, or simulated environments to
train jointly with reinforcement learning. Our method addresses a
gap in discovering 3D keypoints from real videos without 2D or
3D supervision.
and 3D keypoint discovery across different organisms
(humans and rats).
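A minimal sketch of the training signal summarized above: discovered 3D keypoints must allow a decoder to reconstruct per-view spatiotemporal difference images, and the lengths of the learned skeleton's links are kept consistent over time. The model interface, edge list, and loss weighting are illustrative assumptions.

import torch

def bkind3d_losses(frames_t0, frames_t1, model, edges, lam=0.1):
    kp_t0 = model.discover(frames_t0)            # (J, 3) keypoints from all views at time t
    kp_t1 = model.discover(frames_t1)            # (J, 3) keypoints at time t + dt
    recon_loss = 0.0
    for v, (f0, f1) in enumerate(zip(frames_t0, frames_t1)):
        target = (f1 - f0).abs()                 # spatiotemporal difference image for view v
        recon = model.decode(kp_t0, kp_t1, view=v)
        recon_loss = recon_loss + ((recon - target) ** 2).mean()
    bone = lambda kp: torch.stack([(kp[i] - kp[j]).norm() for i, j in edges])
    length_loss = ((bone(kp_t0) - bone(kp_t1)) ** 2).mean()   # joint-length consistency in 3D
    return recon_loss + lam * length_loss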
|
Tyagi_DeGPR_Deep_Guided_Posterior_Regularization_for_Multi-Class_Cell_Detection_and_CVPR_2023 | Abstract
Multi-class cell detection and counting is an essential
task for many pathological diagnoses. Manual counting is
tedious and often leads to inter-observer variations among
pathologists. While there exist multiple, general-purpose,
deep learning-based object detection and counting meth-
ods, they may not readily transfer to detecting and counting
cells in medical images, due to the limited data, presence of
tiny overlapping objects, multiple cell types, severe class-
imbalance, minute differences in size/shape of cells, etc.
In response, we propose guided posterior regularization
(DEGPR) , which assists an object detector by guiding it to
exploit discriminative features among cells. The features
may be pathologist-provided or inferred directly from vi-
sual data. We validate our model on two publicly avail-
able datasets (CoNSeP and MoNuSAC), and on MuCeD,
a novel dataset that we contribute. MuCeD consists of 55
biopsy images of the human duodenum for predicting celiac
disease. We perform extensive experimentation with three
object detection baselines on three datasets to show that
DEGPR is model-agnostic, and consistently improves base-
lines obtaining up to 9% (absolute) mAP gains.
| 1. Introduction
Multi-class multi-cell detection and counting (MC2DC)
is the problem of identifying and localizing bounding boxes
for different cells, followed by counting of each cell class.
MC2DC aids diagnosis of many clinical conditions. For ex-
ample, CBC blood test counts red blood cells, white blood
cells, and platelets, for diagnosing anemia, blood cancer,
and infections [13, 31]. MC2DC over malignant tumor im-
ages helps assess the resistance and sensitivity of cancer
treatments [9]. MC2DC over duodenum biopsies is needed
to compute the ratio of counts of two cell types for diagnos-
ing celiac disease [6]. Cell counting is a tedious process and
*Equal contribution
often leads to significant inter-observer and intra-observer
variations [4, 8]. This motivates the need for an AI system
that can provide robust and reproducible predictions.
Standard object detection models such as Yolo [21],
Faster-RCNN [35] and EfficientDet [44] have achieved
state-of-the-art performance on various object detection set-
tings. However, extending these to detecting cells in med-
ical images poses several challenges. These include lim-
ited availability of annotated datasets, tiny objects of inter-
est (cells) that may be overlapping, similarity in the appear-
ance of different cell types, and skewed cell class distribu-
tion. Due to the non-trivial nature of the problem, MC2DC
models may benefit from insights from trained pathologists,
e.g., via discriminative attributes. For instance, in duode-
num biopsies, intraepithelial lymphocytes (IELs) are struc-
turally smaller, circular, and darker stained, whereas epithe-
lial nuclei (ENs) are bigger, elongated, and lighter. A key
challenge lies in incorporating these expert-insights within
a detection model. A secondary issue is that such insights
may not always be available or may be insufficient – this
motivates additional data-driven features.
We propose a novel deep guided posterior regularization
(DEGPR) framework. Posterior regularization (PR) is an
auxiliary loss [12], which enforces that the posterior distri-
bution of a predictor should mimic the data distribution for
the given features. We call our method deep guided PR,
since we apply it to deep neural models, and it is meant
to formalize the clinical guidance given by pathologists.
DEGPR incorporates PR over two types of features, which
we term explicit and implicit features. Explicit features are
introduced through direct guidance by expert pathologists.
Implicit features are learned feature embeddings for each
class, trained through a supervised contrastive loss [22].
Subsequently, both features are fed into a Gaussian Mix-
ture Model (GMM). D EGPR constrains the distributions
over the predicted features to follow that of the ground truth
features, via a KL divergence loss between them.
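A simplified sketch of the posterior-regularization loss described above. For readability, a single diagonal Gaussian per class stands in for the paper's GMM, so the KL divergence has a closed form; extracting features from predicted and ground-truth boxes is assumed to happen upstream.

import torch

def degpr_loss(pred_feats, pred_labels, gt_feats, gt_labels, num_classes, eps=1e-6):
    loss = 0.0
    for c in range(num_classes):
        p, g = pred_feats[pred_labels == c], gt_feats[gt_labels == c]
        if len(p) < 2 or len(g) < 2:             # not enough samples to estimate a distribution
            continue
        mu_p, var_p = p.mean(0), p.var(0) + eps
        mu_g, var_g = g.mean(0), g.var(0) + eps
        # KL( N(mu_p, var_p) || N(mu_g, var_g) ), summed over feature dimensions.
        kl = 0.5 * (torch.log(var_g / var_p) + (var_p + (mu_p - mu_g) ** 2) / var_g - 1)
        loss = loss + kl.sum()
    return loss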
We test the benefits of D EGPR over three base object
detection models (Yolov5, Faster-RCNN, EfficientDet) on
Figure 1. Visual dissimilarities between IELs and ENs. ENs (first row) are lighter stained, bigger and elongated in structure. IELs (second row) are darker stained, smaller, and circular in shape.
three MC2DC datasets. Of these, two are publicly avail-
able: CoNSeP [15] and MoNuSAC [47]. We additionally
contribute a novel MuCeD dataset for the detection of celiac
disease. MuCeD consists of 55 annotated biopsy images of
the human duodenum, which have a total of 8,600 cell anno-
tations of IELs and ENs. We find that D EGPR consistently
improves detection and counting performance over all base
models on all datasets. For example, on MuCeD, D EGPR
obtains a 3-9% mAP gain for detection and a 10-35% re-
duction in mean absolute error for counting two cell types.
In summary, (a) we propose D EGPR to guide object de-
tection models by exploiting the discriminative visual fea-
tures between different classes of cells; (b) we use super-
vised contrastive learning to learn robust embeddings for
different cell classes, which are then used as implicit fea-
tures for D EGPR; (c) we introduce MuCeD, a dataset of
human duodenum biopsies, which has 8,600 annotated cells
of two types; and (d) we experiment on three datasets, in-
cluding MuCeD, and find that D EGPR strongly improves
detection and counting performance over three baselines.
We release our dataset and code for further research.*
|
Song_Optimal_Proposal_Learning_for_Deployable_End-to-End_Pedestrian_Detection_CVPR_2023 | Abstract
End-to-end pedestrian detection focuses on training
a pedestrian detection model via discarding the Non-
Maximum Suppression (NMS) post-processing. Though a
few methods have been explored, most of them still suffer
from longer training time and more complex deployment,
which cannot be deployed in the actual industrial applica-
tions. In this paper, we intend to bridge this gap and pro-
pose an Optimal Proposal Learning (OPL) framework for
deployable end-to-end pedestrian detection. Specifically,
we achieve this goal by using CNN-based light detector and
introducing two novel modules, including a Coarse-to-Fine
(C2F) learning strategy for proposing precise positive pro-
posals for the Ground-Truth (GT) instances by reducing the
ambiguity of sample assignment/output in training/testing
respectively, and a Completed Proposal Network (CPN) for
producing extra information compensation to further recall
the hard pedestrian samples. Extensive experiments are
conducted on CrowdHuman, TJU-Ped and Caltech, and the
results show that our proposed OPL method significantly
outperforms the competing methods.
| 1. Introduction
Pedestrian detection is a popular computer vision task,
which has been widely employed in many applications
such as robotics [20], intelligent surveillance [39] and au-
tonomous driving [21]. It follows the conventional object
detection pipeline and focuses on the detection of pedes-
trian. To improve the recall of pedestrians, the current popu-
lar pedestrian detectors always generate multiple bounding-
box (bbox) proposals for a Ground-Truth (GT) instance
during testing. And then the Non-Maximum Suppression
(NMS) post-processing technique is used to guarantee the
final precision of detection by removing the duplicated
bboxes.
However, the crowd density is usually high in some real-
world pedestrian detection scenarios, e.g. city centers, rail-
way stations, airports and so on. NMS often performspoorly in these crowd scenes due to the naive duplicate re-
moval of NMS by a single Intersection-over-Union (IoU)
threshold. For example, a lower threshold may cause the
missed detection of some highly overlapped true positives
while a higher threshold could result in more false positives.
Some existing works have attempted to make some im-
provements, e.g. generating more compact bounding boxes
[62, 68], soft suppression strategy [1], learning NMS func-
tion by extra modules [25] and dynamic suppression thresh-
old [33]. However, these works still cannot achieve end-to-
end training and easy deployment in actual industrial ap-
plications. To this end, a straightforward solution is to es-
tablish a fully end-to-end detection pipeline by discarding
NMS. PED [30] and [71] have made some attempts by im-
plementing a NMS-free pipeline for pedestrian detection.
Both of them are query-based methods. Though achiev-
ing higher performances, they still suffer from longer train-
ing time, more complex deployment and larger computation
costs and cannot be actually deployed on the resource lim-
ited devices in industrial applications. Therefore, obtaining
a ‘light and sweet’ end-to-end pedestrian detector remains
important.
Considering the possibility of deployment in actual in-
dustrial applications, performing NMS-free technique upon
the one-stage anchor-free CNN-detector, e.g. FCOS [60], is
more practical and attractive since it is much easier and ef-
ficient to be deployed on resource limited devices with light
computational cost and less pre/post-processing. To achieve
this goal, the CNN-detector should learn to adaptively and
precisely produce true-positive pedestrian proposals at the
correct locations as well as avoiding the duplicates. In gen-
eral object detection, some works [53, 55, 61] propose to
replace the commonly used one-to-many label assignment
strategy with one-to-one label assignment during training.
Specifically, for each GT instance, only one proposal will
be assigned as positive sample while other candidate pro-
posals are assigned as negatives.
However, this solution involves two challenges as fol-
lows: 1) Problem of ambiguous positive proposals for a
larger instance. Specifically, the ideal produced positive
proposal should get a much higher confidence score than
other near-by candidate proposals for the same GT instance.
However, in fact, the extracted features of close-by propos-
als are similar since they usually share some common pix-
els of the same instance. It is difficult for the classification
branch to find a compact classification decision-boundary
to separate them apart. As a result, it confounds the further
model optimization and reduces the precision of the output
proposals; 2) Poor representation ability for tiny and oc-
cluded instances . Specifically, various scales and occlusion
patterns of pedestrians involve a wide range of appearance
changes. It is difficult to guarantee the confidence outputs
from different appearances to be consistent with each other.
Hard pedestrian samples with small scales or in heavy oc-
clusion states are difficult to attain high confidence scores as
those easy samples. Moreover, one-to-one label assignment
only provides fewer positive training samples for learning
these hard instances, further increasing the learning diffi-
culty.
To tackle these issues, this paper proposes the Optimal
Proposal Learning (OPL) framework for deployable end-
to-end pedestrian detection. In OPL, we build the overall
framework upon a CNN-based detector and then propose a
Coarse-to-Fine (C2F) learning strategy for the classification
branch to mitigate the issue of ambiguous positive proposals.
This is mainly achieved by progressively decreasing the
average number of positive samples assigned to each GT
instance. C2F gives the classification branch the chance to
explore the best classification decision boundary via
progressive boundary refinements. Moreover, to ease the
problem of poor representation ability for hard instances, we
propose a Completed Proposal Network (CPN). CPN provides
extra information compensation for hard proposals and gives
them more chances to be detected. Thus, we can obtain a
reliable confidence score for each proposal by combining the
outputs of the classification branch and CPN. The main
contributions are summarized as follows:
• We propose Optimal Proposal Learning (OPL) frame-
work for deployable end-to-end pedestrian detection.
• We design a Coarse-to-Fine (C2F) learning strategy,
which progressively decreases the average number of
positive samples assigned to each GT instance during
training. C2F aims to give the model the chance to
adaptively produce precise positive samples without
ambiguity.
• We propose a Completed Proposal Network (CPN)
that can automatically provide extra compensation for
hard samples with different appearances. CPN is
mainly used to further refine the proposal scores such
that all pedestrians can be successfully recalled.
Extensive experiments conducted on CrowdHuman [49],
TJU-Ped [45], and Caltech [16] demonstrate the superiority
of the proposed OPL.
|
Suo_MixSim_A_Hierarchical_Framework_for_Mixed_Reality_Traffic_Simulation_CVPR_2023 | Abstract
The prevailing way to test a self-driving vehicle (SDV) in
simulation involves non-reactive open-loop replay of real
world scenarios. However, in order to safely deploy SDVs
to the real world, we need to evaluate them in closed-loop.
Towards this goal, we propose to leverage the wealth of
interesting scenarios captured in the real world and make
them reactive and controllable to enable closed-loop SDV
evaluation in what-if situations. In particular, we present
MIXSIM, a hierarchical framework for mixed reality traf-
fic simulation. MIXSIM explicitly models agent goals as
routes along the road network and learns a reactive route-
conditional policy. By inferring each agent’s route from
the original scenario, MIXSIM can reactively re-simulate
the scenario and enable testing different autonomy sys-
tems under the same conditions. Furthermore, by vary-
ing each agent’s route, we can expand the scope of test-
ing to what-if situations with realistic variations in agent
behaviors or even safety critical interactions. Our exper-
iments show that MIXSIM can serve as a realistic, reac-
tive, and controllable digital twin of real world scenar-
ios. For more information, please visit the project website:
https://waabi.ai/research/mixsim/
| 1. Introduction
During your commute to work, you encounter an erratic
driver. As you both approach a merge, the erratic driver sud-
denly accelerates and nearly collides with you. Your heart
skips a beat and you feel uneasy. While you’ve avoided
an accident, you can’t help but wonder: What if the erratic
driver was a bit more aggressive? Would you have needed
to swerve? Would other drivers be able to react to you?
Having the ability to test a self-driving vehicle (SDV)
in these what-if situations would be transformational for
safely deploying SDVs to the real world. However, the cur-
rent approach for testing SDVs in simulation lacks this abil-
ity. In particular, the self-driving industry largely relies on
non-reactive open-loop replay of real world scenarios; traf-
fic agents do not react to the SDV and so the SDV cannot
observe the consequences of its actions. This limits the re-
alism and interpretability of the tests. Clearly, naively re-
playing real world scenarios in simulation is not enough!
*Indicates equal contribution.
Figure 1. In mixed reality traffic simulation, given a real world
scenario, we aim to build a reactive and controllable digital twin
of how its traffic agents behave. This enables us to re-simulate the
original scenario and answer what-if questions like: What if the
SDV lane changes? What if the agent cuts in front of the SDV?
In this paper, we propose to leverage the wealth of inter-
esting scenarios captured in the real world and make them
reactive and controllable to enable closed-loop SDV evalu-
ation in what-if situations. We call this task mixed reality
traffic simulation. Specifically, given a recorded scenario,
we aim to build a reactive and controllable digital twin of
how its traffic agents behave (see Fig. 1). The digital twin
should preserve the high-level behaviors and interactions
of the original scenario; e.g., a driver cutting in front of
the SDV . But it should also react to changes in the envi-
ronment in a realistic manner. This allows us to reactively
re-simulate the original scenario to test different autonomy
systems in the same conditions. It also expands the scope of
testing to include what-if situations with realistic variations
of agent behaviors or even safety critical interactions.
To incorporate reactivity into replaying recorded scenar-
ios, the self-driving industry commonly relies on heuristic
models [ 18,19,21,38] and trajectory optimization-based
methods [ 14,15]. Complementary to this, [ 39] use black
box optimization over agent trajectories to find safety criti-
cal variations of the original scenario. The common pitfall
of these methods is that the resulting scenarios do not ex-
hibit human-like driving behaviors. The large sim-to-real
gap precludes us from drawing meaningful conclusions of
how the SDV will behave in the real world. On the other
hand, recent works [ 2,17,36] capture human-like driving
by learning generative models of traffic scenarios from real
data but can only generate free-flow traffic given the initial
scenario context, without preserving the behaviors and in-
teractions in the original scenario. This motivates the need
for a simulation framework that learns human-like driving
while being controllable at the behavior level.
Towards this goal, we propose MIXSIM, a hierarchical
framework for mixed reality traffic simulation. In our ap-
proach, we explicitly disentangle an agent’s high-level goal
(e.g., taking an off-ramp) from its reactive low-level ma-
neuvers (e.g., braking to avoid collisions). We use routes
along the road network as the goal representation and learn
a reactive route-conditional policy to recover human-like
driving behavior. This enables high-level controllability
via specifying the agent’s route, while the low-level pol-
icy ensures realistic interaction in closed-loop simulation.
MIXSIM re-simulates a recorded scenario by first inferring
each agent’s route from its original trajectory and then un-
rolling the route-conditional policy. Furthermore, it enables
synthesizing what-if situations by conditioning on routes
sampled from a learned route-generating policy or routes
optimized to induce safety critical interactions.
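The hierarchy can be pictured with the following schematic sketch; the class and function names are hypothetical placeholders rather than MIXSIM's released components, and the kinematics are deliberately simplistic. A route encoding stands in for the inferred, sampled, or optimized route, and a route-conditional policy is then unrolled in closed loop.

```python
import torch
import torch.nn as nn

class RouteConditionalPolicy(nn.Module):
    """Stand-in for the low-level policy: maps (agent state, route encoding)
    to the next action (here, acceleration and steering)."""
    def __init__(self, state_dim=4, route_dim=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + route_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2))

    def forward(self, state, route_feat):
        return self.net(torch.cat([state, route_feat], dim=-1))

def resimulate(policy, route_feat, init_state, steps=50, dt=0.1):
    """Unroll the route-conditional policy in closed loop.
    State = (x, y, heading, speed); route_feat is a fixed feature vector here."""
    state, traj = init_state, [init_state]
    for _ in range(steps):
        accel, steer = policy(state, route_feat).unbind(-1)
        x, y, yaw, v = state.unbind(-1)
        v = v + accel * dt
        yaw = yaw + steer * dt
        state = torch.stack([x + v * torch.cos(yaw) * dt,
                             y + v * torch.sin(yaw) * dt, yaw, v], dim=-1)
        traj.append(state)
    return torch.stack(traj)

# Example: roll out one agent for 5 seconds under a sampled route encoding.
policy = RouteConditionalPolicy()
traj = resimulate(policy, torch.randn(16), torch.tensor([0., 0., 0., 5.]))
```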
To understand the suitability of MIXSIM for mixed real-
ity traffic simulation, we conduct an analysis of its sim2real
domain gap on both urban and highway traffic scenarios.
Our experiments show that MIXSIM exhibits greater real-
ism, reactivity, and controllability than the competing base-
lines. Notably, MIXSIM achieves the lowest reconstruc-
tion error when re-simulating a given scenario and the low-
est collision rate when reacting to diverse SDV behaviors.
We also demonstrate MIXSIM’s ability to simulate use-
ful what-if scenarios for autonomy development. Specifi-
cally, MIXSIM can sample diverse yet realistic what-if sit-
uations that cover the space of what could have happened
and generate safety critical scenarios that are far more re-
alistic than existing methods. Altogether, MIXSIM unlocks
closed-loop SDV evaluation in what-if scenarios of varying
severity. This represents an exciting first step towards a new
paradigm for offline autonomy evaluation.
|
Tan_SMOC-Net_Leveraging_Camera_Pose_for_Self-Supervised_Monocular_Object_Pose_Estimation_CVPR_2023 | Abstract
Recently, self-supervised 6D object pose estimation,
where synthetic images with object poses (sometimes jointly
with un-annotated real images) are used for training, has
attracted much attention in computer vision. Some typical
works in literature employ a time-consuming differentiable
renderer for object pose prediction at the training stage, so
that (i) their performances on real images are generally lim-
ited due to the gap between their rendered images and real
images and (ii) their training process is computationally ex-
pensive. To address the two problems, we propose a novel
Network for Self-supervised Monocular Object pose esti-
mation by utilizing the predicted Camera poses from un-
annotated real images, called SMOC-Net. The proposed
network is explored under a knowledge distillation frame-
work, consisting of a teacher model and a student model.
The teacher model contains a backbone estimation module
for initial object pose estimation, and an object pose refiner
for refining the initial object poses using a geometric con-
straint (called relative-pose constraint) derived from rela-
tive camera poses. The student model gains knowledge for
object pose estimation from the teacher model by impos-
ing the relative-pose constraint. Thanks to the relative-pose
constraint, SMOC-Net could not only narrow the domain
gap between synthetic and real data but also reduce the
training cost. Experimental results on two public datasets
demonstrate that SMOC-Net outperforms several state-of-
the-art methods by a large margin while requiring much less
training time than the differentiable-renderer-based meth-
ods.
| 1. Introduction
Monocular 6D object pose estimation is a challenging
task in the computer vision field, which aims to estimate
object poses from single images. According to whether realimages with ground-truth object poses are given for model
training, the existing works for monocular object pose es-
timation in literature could be divided into two categories:
fully-supervised methods [17, 35] which are trained by uti-
lizing annotated real images with ground-truth object poses,
and self-supervised methods [16, 34] which are trained by
utilizing synthetic images with object poses (sometimes
jointly with un-annotated real images). Since it is very time-
consuming to obtain high-quality object poses as ground
truth, self-supervised monocular object pose estimation has
attracted increasing attention recently [16, 33, 39].
Some existing methods for self-supervised monocular
object pose estimation [16, 31] use only synthetic images
with object poses (which are generated via Blender [22]
or some other rendering tools [24, 29]) for training. How-
ever, due to the domain gap between real and synthetic
data, the performances of these self-supervised methods are
significantly lower compared to the fully-supervised meth-
ods [22, 35]. Addressing this domain gap problem, a few
recent self-supervised methods [33, 34, 39] jointly use syn-
thetic images with object pose and un-annotated real images
at their training stage, where a differentiable renderer [19] is
introduced to provide a constraint on the difference between
real and rendered images. Although these methods could al-
leviate the domain gap problem by utilizing the introduced
differentiable renderer, they still have to be confronted with
the following two problems: (i) There still exists a notice-
able gap between real images and rendered images by the
differentiable renderer, so that their performances on object
pose estimation are still limited; (ii) Much time has to be
spent on differentiable rendering during training, so that the
training costs of these methods are quite heavy.
To address the above problems, this paper proposes a
novel Network for Self-supervised Monocular Object pose
estimation by utilizing the predicted Camera poses from un-
annotated real images, called SMOC-Net. The SMOC-Net
is designed via the knowledge distillation technique, con-
sisting of a teacher model and a student model. Under the
teacher model, a backbone pose estimation module is in-
troduced to provide an initial estimate of the pose of the
object in the image. Then, a geometric constraint (called relative-
pose constraint) on object poses is mathematically derived
from relative camera poses which are calculated by a typ-
ical structure from motion method (here, we straightfor-
wardly use COLMAP [25, 26]), and a camera-pose-guided
refiner is further explored to refine the initial object pose
based on this constraint. The student model simply employs
the same architecture as the backbone estimation module of
the teacher model, and it learns knowledge from the teacher
model by imposing the relative-pose constraint, so that it
could estimate object poses as accurately and fast as possi-
ble. Once the proposed SMOC-Net has been trained, only
its student model is used to predict the object pose from an
arbitrary testing image.
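To make the relative-pose constraint concrete, here is a small sketch of our own (not the authors' implementation): if T_i and T_j denote predicted object-to-camera poses in two frames and T_rel the relative camera pose recovered by structure from motion, a static object should satisfy T_j ≈ T_rel · T_i, and the deviation can be penalized directly.

```python
import torch

def relative_pose_loss(T_i, T_j, T_rel):
    """Consistency between object poses predicted in two frames and the
    relative camera pose between those frames. All inputs are (4, 4)
    homogeneous transforms; T_rel maps camera-i coordinates to camera-j
    coordinates (e.g. recovered by a structure-from-motion tool)."""
    T_j_pred = T_rel @ T_i                        # where the object should appear in frame j
    rot_err = T_j_pred[:3, :3] @ T_j[:3, :3].T    # residual rotation
    rot_loss = torch.linalg.norm(rot_err - torch.eye(3))
    trans_loss = torch.linalg.norm(T_j_pred[:3, 3] - T_j[:3, 3])
    return rot_loss + trans_loss

# With an identity relative camera motion the two predictions must agree:
print(relative_pose_loss(torch.eye(4), torch.eye(4), torch.eye(4)))  # tensor(0.)
```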
In sum, the main contributions in this paper include:
1. We design the relative-pose constraint on object
poses under the knowledge distillation framework for self-
supervised object pose estimation, which helps narrow the
domain gap between synthetic and real data to some extent.
2. According to the designed relative-pose constraint,
we explore the camera-pose-guided refiner, which is able to
refine low-accuracy object poses effectively.
3. By jointly utilizing the camera-pose-guided refiner
and the above object pose constraint, we propose the
SMOC-Net for monocular 6D object pose estimation. Ex-
perimental results in Sec. 4 demonstrate that the proposed
SMOC-Net does not only outperform several state-of-the-
art methods when only synthetic images with object poses
and un-annotated real images are used for training, but also
perform better than three state-of-the-art fully-supervised
methods on the public dataset LineMOD [8].
|
Sun_Single_Image_Backdoor_Inversion_via_Robust_Smoothed_Classifiers_CVPR_2023 | Abstract
Backdoor inversion, the process of finding a backdoor
“trigger” inserted into a machine learning model, has be-
come the pillar of many backdoor detection and defense
methods. Previous works on backdoor inversion often re-
cover the backdoor through an optimization process to flip
a support set of clean images into the target class. How-
ever, it is rarely studied and understood how large this
support set should be to recover a successful backdoor.
In this work, we show that one can reliably recover the
backdoor trigger with as few as a single image. Specifi-
cally, we propose the SmoothInv method, which first con-
structs a robust smoothed version of the backdoored clas-
sifier and then performs guided image synthesis towards
the target class to reveal the backdoor pattern. SmoothInv
requires neither an explicit modeling of the backdoor via
a mask variable, nor any complex regularization schemes,
which has become the standard practice in backdoor in-
version methods. We perform both quantitative and qual-
itative studies on backdoored classifiers from previously pub-
lished backdoor attacks. We demonstrate that compared to
existing methods, SmoothInv is able to recover successful
backdoors from single images, while maintaining high fi-
delity to the original backdoor. We also show how we iden-
tify the target backdoored class from the backdoored clas-
sifier. Last, we propose and analyze two countermeasures
to our approach and show that SmoothInv remains robust
in the face of an adaptive attacker. Our code is available at
https://github.com/locuslab/smoothinv .
| 1. Introduction
Backdoor attacks [3–5, 9, 11, 14, 29, 30, 45], the prac-
tice of injecting a covert backdoor into a machine learn-
ing model for inference time manipulation, have become
a popular threat model in machine learning security com-
munity. Given the pervasive threat of backdoor attacks,
e.g. on self-supervised learning [7, 35], language mod-
elling [28, 47] and 3d point cloud [46], there has been
growing research interest in reverse engineering the back-
door given a backdoored classifier. This reverse engineer-
ing process, often called backdoor inversion [39], is cru-
cial in many backdoor defense [15, 32] and detection meth-
ods [12, 17, 18, 20, 21, 24, 31, 38, 43, 44]. A successful back-
door inversion method should be able to recover a backdoor
satisfying certain requirements. On one hand, the reversed
backdoor should be successful, meaning that it should still
have a high attack success rate (ASR) on the backdoored
classifier. On the other hand, it should be faithful, where
the reversed backdoor should be close, e.g. in visual simi-
larity, to the true backdoor.
Figure 1. Single Image Backdoor Inversion: Given a back-
doored classifier (sampled randomly from TrojAI benchmark [1]),
SmoothInv takes a single clean image (left) as input and recovers
the hidden backdoor (right) with high visual similarity to the orig-
inal backdoor (middle).
Given a backdoored classifier f_b and a support set S of
clean images, a well-established framework for backdoor
inversion solves the following optimization problem:
\min_{m,p} \; \mathbb{E}_{x \in S}\big[\mathcal{L}(f_b(\phi(x)),\, y_t)\big] + \mathcal{R}(m, p) \qquad (1)
\text{where} \quad \phi(x) = (1 - m) \odot x + m \odot p
where variables m and p represent the mask and pertur-
bation vectors respectively, y_t denotes the target label, ⊙
is element-wise multiplication, L(·,·) is the cross-entropy
function and R(·,·) is a regularization term. Since it was
first proposed in [43], Equation 1 has been adopted and ex-
tended by many backdoor inversion methods. While most
of these works focus on designing new regularization terms
R and backdoor modelling functions ϕ, it is often assumed
by default that there is a set S of clean images available. In
practice, however, we may not have access to many clean
images beforehand. Thus we are motivated by the following
question:
Can we perform backdoor inversion with as few clean
images as possible?
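For reference, a compact PyTorch sketch of the optimization in Eq. (1) (a simplified rendering of our own, with a plain ℓ1 penalty on the mask standing in for R and sigmoid re-parameterizations keeping the mask and pattern in a valid range; actual methods differ in their regularizers and schedules):

```python
import torch
import torch.nn.functional as F

def invert_trigger(f_b, support, y_t, steps=500, lam=1e-3, lr=0.1):
    """Recover a (mask, pattern) trigger that flips a support set of clean
    images to the target class y_t, following the general form of Eq. (1).
    f_b: backdoored classifier returning logits; support: (N, C, H, W)
    clean images assumed to lie in [0, 1]."""
    _, C, H, W = support.shape
    m = torch.zeros(1, 1, H, W, requires_grad=True)   # mask logits
    p = torch.zeros(1, C, H, W, requires_grad=True)   # trigger pattern logits
    opt = torch.optim.Adam([m, p], lr=lr)
    target = torch.full((support.shape[0],), y_t, dtype=torch.long)
    for _ in range(steps):
        mask = torch.sigmoid(m)                        # keep mask in [0, 1]
        x_trig = (1 - mask) * support + mask * torch.sigmoid(p)
        loss = F.cross_entropy(f_b(x_trig), target) + lam * mask.sum()
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(m).detach(), torch.sigmoid(p).detach()
```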
In this work, we show that one single image is enough
for backdoor inversion. On a high level, we view back-
door attacks as encoding backdoored images into the data
distribution of the target class during training. Then, we
hope to reconstruct these encoded backdoored images via a
class-conditional image synthesis process to generate exam-
ples from the target class. Though the idea of using image
synthesis is straight-forward, it is not immediately obvious
how to do this in practice given a backdoored classifier. For
instance, directly minimizing the classification loss of the
target class reduces to random adversarial noise, as shown
in previous adversarial attacks literature [40]. Additionally,
generative models such as GANs [13] are not practical in
this setting, since we would need to train a separate gen-
erative model for each backdoored classifier, and we don’t
usually have access to the training set for the task of back-
door inversion.
Our approach, which we call SmoothInv, synthesizes
backdoor patterns by inducing salient gradients of back-
door features via a special robustification process that
converts a standard non-robust model into an adversarially
robust one. Specifically, SmoothInv first constructs a ro-
bust smoothed version of the backdoored classifier, which
is provably robust to adversarial perturbations. Once we
have the robust smoothed classifier, we perform guided im-
age synthesis to recover backdoored images that the robust
smoothed classifier perceives as the target class.
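The core loop can be sketched as follows under simplifying assumptions (Monte Carlo Gaussian smoothing and plain gradient ascent on the target-class probability); this is an illustration of the idea rather than the exact SmoothInv procedure.

```python
import torch
import torch.nn.functional as F

def smoothed_probs(f_b, x, sigma=0.25, n=32):
    """Monte Carlo estimate of the Gaussian-smoothed classifier's class
    probabilities at a single image x of shape (C, H, W)."""
    noise = sigma * torch.randn(n, *x.shape)
    return F.softmax(f_b(x.unsqueeze(0) + noise), dim=1).mean(dim=0)

def synthesize_backdoor(f_b, x_clean, y_t, steps=200, lr=0.05):
    """Start from one clean image and ascend the smoothed probability of the
    target class y_t; the emerging perturbation tends to expose the trigger."""
    x = x_clean.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        loss = -torch.log(smoothed_probs(f_b, x, n=16)[y_t] + 1e-8)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)      # keep the synthesized image in a valid range
    return x.detach()
```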
Compared to the existing inversion framework in Equa-
tion 1, our approach uses only a single image as the support
set. Single image backdoor inversion has not been shown
possible for previous backdoor inversion methods as they
usually require multiple clean instances for their optimiza-
tion methods to give reasonable results. Moreover, our ap-
proach has the added benefit of simplicity: we do not in-
troduce any custom-designed optimization constraints, e.g.
mask variables and regularization. Most importantly, the
backdoor found by our approach has remarkable visual
resemblance to the original backdoor, despite being con-
structed from a single image. In Figure 1, we demonstrate
such visual similarity for a backdoored classifier.
We evaluate our method on a collection of backdoored
classifiers from previously published studies, where we ei-
ther download their pretrained models or train a replica
using the publicly released code. These collected back-
doored classifiers cover a diverse set of backdoor condi-
tions, e.g., patch shape, color, size and location. We evalu-
ate SmoothInv on these backdoored classifiers for single im-
age backdoor inversion and show that SmoothInv finds both
successful and faithful backdoors from single images. We
also show how we distinguish the target backdoored class
from normal classes, where our method (correctly) is un-
able to find an effective backdoor for the latter. Last, we
evaluate attempts to circumvent our approach and show that
SmoothInv is still robust under this setting.
Figure 2. Backdoors of the backdoored classifiers we consider
in this paper (listed in Table 1). The polygon trigger (leftmost)
is a representative backdoor used in the TrojAI benchmark. The
pixel pattern (9 pixels) and single pixel backdoors used in [3] are
overlaid on a background blue image for better visualization.
|
Xia_Structured_Sparsity_Learning_for_Efficient_Video_Super-Resolution_CVPR_2023 | Abstract
The high computational costs of video super-resolution
(VSR) models hinder their deployment on resource-limited
devices, e.g., smartphones and drones. Existing VSR mod-
els contain considerable redundant filters, which drag down
the inference efficiency. To prune these unimportant fil-
ters, we develop a structured pruning scheme called Struc-
tured Sparsity Learning (SSL) according to the properties
of VSR. In SSL, we design pruning schemes for several key
components in VSR models, including residual blocks, re-
current networks, and upsampling networks. Specifically,
we develop a Residual Sparsity Connection (RSC) scheme
for residual blocks of recurrent networks to liberate prun-
ing restrictions and preserve the restoration information.
For upsampling networks, we design a pixel-shuffle prun-
ing scheme to guarantee the accuracy of feature channel-
space conversion. In addition, we observe that pruning
error would be amplified as the hidden states propagate
along with recurrent networks. To alleviate the issue, we
design Temporal Finetuning (TF). Extensive experiments
show that SSL can significantly outperform recent methods
quantitatively and qualitatively. The code is available at
https://github.com/Zj-BinXia/SSL .
| 1. Introduction
Video super-resolution (VSR) aims to generate a
high-resolution (HR) video from its corresponding low-
resolution (LR) observation by filling in missing details.
With the popularity of intelligent edge devices such as
smartphones and small drones, performing VSR on these
devices is in high demand. Although a variety of VSR net-
works [20, 24, 29, 44, 51] can achieve great performance,
these models are usually difficult to be deployed on edge
devices with limited computation and memory resources.
To alleviate this issue, we explore a new direction for
effective and efficient VSR. To reduce the redundancy of
Conv kernels [4, 5, 36, 38] and obtain a more efficient VSR
network, we develop, for the first time, a neural network
pruning scheme for the VSR task. Since structured prun-
ing [14, 23, 46, 57] (focusing on filter pruning) can achieve
an actual acceleration [41, 46] superior to unstructured prun-
ing [11, 12] (focusing on weight-element pruning), we adopt
the structured pruning principle to develop our VSR pruning
scheme. Given a powerful VSR network, our pruning
scheme can find submodels under a preset pruning rate
without significantly compromising performance.
*Corresponding Author
Structured pruning is a general concept, and designing
a concrete pruning scheme for VSR networks is challeng-
ing.(1)Recurrent networks are widely used in VSR models
to extract temporal features, consisting of residual blocks
(e.g., BasicVSR [2] has 60 residual blocks). However, it
is hard to prune the residual blocks because the skip and
residual connections ought to share the same indices [23]
(Fig. 1 (a)). As shown in Fig. 1 (b), quite a few structured
pruning schemes [23,34] do not prune the last Conv layer of
the residual blocks, which restricts the pruning space. Re-
cently, as shown in Fig. 1 (c), ASSL [57] and SRPN [58]
introduce regularization and prune the same indices on skip
and residual connections to keep channel alignment (a local
pruning scheme, i.e., each layer prunes the same ratio of
filters). However, ASSL and SRPN still cannot realize the full
potential of pruning residual blocks in recurrent networks.
Recurrent networks take the previous output as later in-
put (Fig. 2 (a)). This requires the pruned indices of the
first and last Convs in recurrent networks to be the same.
But ASSL and SRPN cannot guarantee filter indices are
aligned. Besides, many SR methods [45, 56] have shown
that the information contained in front Conv layers can help
the restoration feature extraction of later Conv layers. Thus,
we design a Residual Sparsity Connection (RSC) for VSR
recurrent networks, which preserves all channels of the in-
put and output feature maps and selects the important chan-
nels for operation (Fig. 1 (d)). Compared with other pruning
schemes [57, 58], RSC does not require the pruned indices
of the first and last Convs of recurrent networks to be the
same, can preserve the information contained in all layers,
and liberates the pruning space of the last Conv of the resid-
ual blocks without adding extra calculations. Notably, RSC
can prune residual blocks globally ( i.e., the filters in various
layers are compared together to remove unimportant ones).
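A minimal sketch of the idea behind RSC (our own simplification, not the released implementation): the identity path carries all channels, while the convolutions read from and write to selected subsets of channel indices, so the first and last convolutions of a block can be pruned without index alignment.

```python
import torch
import torch.nn as nn

class RSCBlock(nn.Module):
    """Residual block in the spirit of RSC: the skip path keeps every channel,
    while the convolutions only use a selected subset of important channels,
    so no restoration information carried by the feature map is discarded."""
    def __init__(self, in_idx, out_idx):
        super().__init__()
        self.register_buffer("in_idx", torch.tensor(in_idx))
        self.register_buffer("out_idx", torch.tensor(out_idx))
        self.conv1 = nn.Conv2d(len(in_idx), len(in_idx), 3, padding=1)
        self.conv2 = nn.Conv2d(len(in_idx), len(out_idx), 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        h = self.conv2(self.relu(self.conv1(x[:, self.in_idx])))
        # add the residual only into the selected output channels;
        # all other channels pass through untouched via the skip path
        return x.index_add(1, self.out_idx, h)

# Example: a 64-channel feature map, convs restricted to pruned subsets.
block = RSCBlock(in_idx=list(range(48)), out_idx=list(range(8, 40)))
y = block(torch.randn(2, 64, 16, 16))   # output still has all 64 channels
```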
Figure 1. Illustration of different schemes for pruning residual blocks of recurrent networks. (a) Structure of the residual block in the VSR
network. (b) The residual block pruning schemes [7, 23, 42] do not prune the last Conv. (c) ASSL [57] and SRPN [58] prune the same
indices on skip and residual connections to keep channel alignment, which abandons some channels of input and output feature maps. (d)
RSC preserves all channels of input and output feature maps, which does not need to align the pruned indices on the first and last Convs in
recurrent networks, can fully use restoration information, and can prune the first and last Convs of residual blocks without restrictions.
(2) We observe that the upsampling network accounts
for 22% of the total calculations in BasicVSR [2], so it also
needs to be pruned to reduce redundancy. Since the
pixel-shuffle [37] operation in VSR networks converts the
channels to space, pruning the pixel-shuffle without any re-
strictions would cause the channel-space conversion to fail.
Thus, we specially design a pixel-shuffle pruning scheme
by taking four consecutive filters as the pruning unit for 2 ×
pixel-shuffle. (3)Furthermore, we observe that the error
of pruning VSR networks would accumulate with propaga-
tion steps increasing along with recurrent networks, which
limits the efficiency and performance of pruning. Thus, we
further introduce Temporal Finetuning (TF) to constrain the
pruning error accumulation in recurrent networks. Overall,
our main contributions are threefold:
• Our work is necessary and timely. There is an urgent
need to compress VSR models for deployment. To the
best of our knowledge, we are one of the first to design
a structured pruning scheme for VSR.
• We propose an integral VSR pruning scheme called
Structured Sparsity Learning (SSL) for various com-
ponents of VSR models, such as residual blocks, re-
current networks, and pixel-shuffle operations.
• We employ SSL to train VSR models, which surpass
recent pruning schemes and lightweight VSR models.
|
Van_Hoorick_Tracking_Through_Containers_and_Occluders_in_the_Wild_CVPR_2023 | Abstract
Tracking objects with persistence in cluttered and dy-
namic environments remains a difficult challenge for com-
puter vision systems. In this paper, we introduce TCOW ,
a new benchmark and model for visual tracking through
heavy occlusion and containment. We set up a task where
the goal is to, given a video sequence, segment both the pro-
jected extent of the target object, as well as the surrounding
container or occluder whenever one exists. To study this
task, we create a mixture of synthetic and annotated real
datasets to support both supervised learning and structured
evaluation of model performance under various forms of
task variation, such as moving or nested containment. We
evaluate two recent transformer-based video models and
find that while they can be surprisingly capable of track-
ing targets under certain settings of task variation, there re-
mains a considerable performance gap before we can claim
a tracking model to have acquired a true notion of object
permanence.
| 1. Introduction
The interplay between containment and occlusion can
present a challenge to even the most sophisticated visual
reasoning systems. Consider the pictorial example in Fig-
ure 1a. Given four frames of evidence, where is the red ball
in the final frame? Could it be anywhere else? What visual
evidence led you to this conclusion?
In this paper, we explore the problem of tracking and
segmenting a target object as it becomes occluded or con-
tained by other dynamic objects in a scene. This is an
essential skill for a perception system to attain, as objects
of interest in the real world routinely get occluded or con-
tained. Acquiring this skill could, for example, help a robot
to better track objects around a cluttered kitchen or ware-
house [10], or a road agent to understand traffic situations
more richly [62]. There are also applications in augmented
reality, smart cities, and assistive technology.
It has long been known that this ability, commonly re-
ferred to as object permanence, emerges early on in a child’s
lifetime (see e.g. [2, 3, 5–8, 49, 54–57]). But how far away
are computer vision systems from attaining the same?
Figure 1. Containment (a) and occlusion (b) happen constantly
in the real world. We introduce a novel task and dataset for evalu-
ating the object permanence capabilities of neural networks under
diverse circumstances.
To support the study of this question, we first propose a
comprehensive benchmark video dataset of occlusion- and
containment-rich scenes of multi-object interactions. These
scenes are sourced from both simulation, in which ground
truth masks can be perfectly synthesized, and from the real
world, which we hand-annotate with object segments. To
allow for an extensive analysis of the behaviours of exist-
ing tracking systems, we ensure that our evaluation set cov-
ers a wide range of different types of containment and oc-
clusion. For example, even though an object undergoing
containment is already a highly non-trivial event, contain-
ers can move, be nested, be deformable, become occluded,
and much more. Occlusion can introduce considerable un-
certainty, especially when the occludee, the occluder, or the
camera are in motion on top of everything else.
Using our dataset, we explore the performance of two re-
cent state-of-the-art video transformer architectures, which
we repurpose for the task of tracking and segmenting a
target object through occlusion and containment in RGB
video. We show through careful quantitative and qualitative
analyses that while our models achieve reasonable tracking
performance in certain settings, there remains significant
room for improvement in terms of reasoning about object
permanence in complicated, realistic environments. By re-
leasing our dataset and benchmark along with this paper, we
hope to draw attention to this challenging milestone on the
path toward strong spatial reasoning capabilities.
|
Wang_Learning_To_Detect_and_Segment_for_Open_Vocabulary_Object_Detection_CVPR_2023 | Abstract
Open vocabulary object detection has been greatly ad-
vanced by the recent development of vision-language pre-
trained model, which helps recognize novel objects with
only semantic categories. Prior works mainly focus on
transferring knowledge to object proposal classification
and employ class-agnostic box and mask prediction. In this
work, we propose CondHead , a principled dynamic network
design to better generalize the box regression and mask seg-
mentation for open vocabulary setting. The core idea is to
conditionally parameterize the network heads on semantic
embedding and thus the model is guided with class-specific
knowledge to better detect novel categories. Specifically,
CondHead is composed of two streams of network heads,
the dynamically aggregated head and dynamically gener-
ated head. The former is instantiated with a set of static
heads that are conditionally aggregated, these heads are
optimized as experts and are expected to learn sophisticated
prediction. The latter is instantiated with dynamically gen-
erated parameters and encodes general class-specific infor-
mation. With such a conditional design, the detection model
is bridged by the semantic embedding to offer strongly gen-
eralizable class-wise box and mask prediction. Our method
brings significant improvement to the state-of-the-art open
vocabulary object detection methods with very minor over-
head, e.g., it surpasses a RegionClip model by 3.0 detection
AP on novel categories, with only 1.1% more computation.
| 1. Introduction
Given the semantic object categories of interest, object
detection aims at localizing each object instance from the
input images. The prior research efforts mainly focus on the
close-set setting, where the images containing the interested
object categories are annotated and used to train a detector.
The obtained detector only recognizes object categories that
are annotated in the training set. In such a setting, more data
needs to be collected and annotated if a novel category1 needs
to be detected. However, data collection and annotation are
very costly for object detection, which raises a significant
challenge for traditional object detection methods.
1 We treat category and class interchangeably in this paper.
Figure 1. Illustration of our main intuition. Given the object
proposals, the bounding box regression and mask segmentation
learned from some object categories could generalize to the tar-
get category. For example, the knowledge learned from a chicken
could help detect and segment the long thin feet and the small head
of an ibis (upper row). Similarly for the hairbrush, the knowledge
learned from the toothbrush could better handle the extreme aspect
ratio and occlusion from the hand (lower row).
To address the challenge, the open vocabulary object de-
tection methods are widely explored recently, these meth-
ods [1, 7, 9, 11, 14, 21, 23, 30–32] aim at generalizing ob-
ject detection on novel categories by only training on a set
of labeled categories. The core of these methods is trans-
ferring the strong image-text aligned features [16, 22] to
classify objects of novel categories. To achieve bound-
ing box regression and mask segmentation on novel cat-
egories, they simply employ the class-agnostic network
heads. Although class agnostic heads can be readily ap-
plied to novel target object categories, they offer limited
capacity to learn category-specific knowledge like object
shape, and thus provide sub-optimal performance.
Figure 2. Overview of CondHead. To detect objects of novel categories, we aim at conditionally parameterizing the bounding box
regression and mask segmentation based on the semantic embedding, which is strongly correlated with the visual feature and provides
effective class-specific cues to refine the box and predict the mask.
On the
other hand, training class-wise heads is not feasible as we
do not have bounding box and mask annotation for the tar-
get categories. As shown in Figure 1, our intuition is that
the class-wise knowledge could naturally generalize across
object categories, and may be leveraged to achieve much
higher quality box regression and mask segmentation on the
target categories, in a category-specific manner. However,
we find a brute force way of training class-wise heads on the
base categories and manually gathering the class-specific
prediction with the closest semantic similarity during inference
provides limited gain. The reason is that there still remains a
gap between the base and target categories, such as appear-
ance and context.
Motivated by the strong semantic-visual aligned repre-
sentation [7,11,31,32] in open vocabulary object detection,
we propose to exploit the semantic embedding as a condi-
tional prior to parameterize class-wise bounding box regres-
sion and mask segmentation. Such conditioning is learned
on base categories and easily generalizes to novel cate-
gories with the semantic embedding. Our method, named
CondHead , is based on dynamic parameter generation of
neural networks [5, 6, 15, 17, 29]. To achieve strong ef-
ficiency, it exploits both large complex network heads for
their representational power and small light network heads
for their efficiency. The complex head is realized by condi-
tional weight aggregation over a set of static heads. The
light head is realized by dynamically basing its param-
eters on the semantic embedding. The final prediction is
obtained by combining the predictions of the two streams.
Through optimization on the base categories, the set of
static heads is expected to learn sophisticated expert
knowledge to cope with complex shapes and appearances,
while the dynamic head is endowed with general class-specific
knowledge such as color and context.
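The two streams can be illustrated with the following schematic (made-up dimensions, box regression only, and without the surrounding detection pipeline); the first stream softly aggregates a bank of expert heads with weights predicted from the class embedding, and the second generates a small head's parameters directly from that embedding.

```python
import torch
import torch.nn as nn

class CondBoxHead(nn.Module):
    """Semantic-conditioned box-regression head with two streams."""
    def __init__(self, feat_dim=256, embed_dim=512, num_experts=4):
        super().__init__()
        # stream 1: a bank of static expert heads + a conditional aggregator
        self.experts = nn.Parameter(torch.randn(num_experts, feat_dim, 4) * 0.01)
        self.agg = nn.Linear(embed_dim, num_experts)
        # stream 2: a tiny head whose weights are generated from the embedding
        self.gen = nn.Linear(embed_dim, feat_dim * 4)

    def forward(self, roi_feat, class_embed):
        # roi_feat: (N, feat_dim); class_embed: (embed_dim,) for one category
        w = torch.softmax(self.agg(class_embed), dim=-1)          # (E,)
        expert_head = (w[:, None, None] * self.experts).sum(0)    # (feat_dim, 4)
        dyn_head = self.gen(class_embed).view(-1, 4)              # (feat_dim, 4)
        return roi_feat @ expert_head + roi_feat @ dyn_head       # (N, 4) box deltas

head = CondBoxHead()
deltas = head(torch.randn(8, 256), torch.randn(512))
```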
Our CondHead is flexible regarding the choice of
semantic-visual representation. The stronger quality of the
aligned representation is expected to bring higher perfor-mance, as the conditional knowledge from the semantic
embedding could generalize better to the target visual fea-
tures. This is demonstrated by the clear grow of improve-
ment over three baselines with increasing quality of pre-
trained semantic-visual encoder networks, OVR-CNN [31],
ViLD [11] and RegionCLIP [32], on both COCO [19] and
LVIS [12] datasets. Remarkably, CondHead brings an av-
erage 2.8 improvement w.r.t both box and mask AP for
the strong RegionCLIP baseline, with only about 1% more
computation. We also demonstrate intriguing qualitative re-
sults, showing how the semantic conditioning positively af-
fects the regression and segmentation tasks.
Our contributions are three-fold. 1) To the best of our
knowledge, we are the first to leverage semantic-visual
aligned representation for open vocabulary box regression
and mask segmentation. 2) We design a differentiable
semantic-conditioned head to efficiently bridge the
strong category-specific prediction learned on base cate-
gories to the target novel categories. 3) We extensively val-
idate the proposed method on various benchmark datasets
and conduct thorough ablation and analysis to understand
how the semantic conditioning helps detect and segment the
novel categories.
|
Wu_Aligning_Bag_of_Regions_for_Open-Vocabulary_Object_Detection_CVPR_2023 | Abstract
Pre-trained vision-language models (VLMs) learn to
align vision and language representations on large-scale
datasets, where each image-text pair usually contains a bag
of semantic concepts. However, existing open-vocabulary
object detectors only align region embeddings individually
with the corresponding features extracted from the VLMs.
Such a design leaves the compositional structure of semantic
concepts in a scene under-exploited, although the structure
may be implicitly learned by the VLMs. In this work, we
propose to align the embedding of bag of regions beyond in-
dividual regions. The proposed method groups contextually
interrelated regions as a bag. The embeddings of regions
in a bag are treated as embeddings of words in a sentence,
and they are sent to the text encoder of a VLM to obtain the
bag-of-regions embedding, which is learned to be aligned
to the corresponding features extracted by a frozen VLM.
Applied to the commonly used Faster R-CNN, our approach
surpasses the previous best results by 4.6 box AP 50and 2.8
mask AP on novel categories of open-vocabulary COCO
and LVIS benchmarks, respectively. Code and models are
available at https://github.com/wusize/ovdet .
| 1. Introduction
A traditional object detector can only recognize categories
learned in the training phase, restricting its application in
the real world with a nearly unbounded concept pool. Open-
vocabulary object detection (OVD), a task to detect objects
whose categories are absent in training, has drawn increasing
research attention in recent years.
A typical solution to OVD, known as the distillation-
based approach, is to distill the knowledge of rich and
unseen categories from pre-trained vision-language mod-
els (VLMs) [21, 38]. In particular, VLMs learn aligned
image and text representations on large-scale image-text
pairs (Fig. 1(a)). Such general knowledge is beneficial for
OVD. To extract the knowledge, most distillation-based ap-
proaches [10, 15, 52] align each individual region embed-
ding to the corresponding features extracted from the VLM
(Fig. 1(b)) with some carefully designed strategies.
*Corresponding author.
Figure 1. (a) Typical vision-language models (VLMs) learn to align
representations of images and captions with rich compositional
structure. (b) Existing distillation-based object detectors align each
individual region embedding to features extracted by the frozen
image encoder of VLMs. (c) Instead, the proposed method aligns
the embedding of bag of regions. The region embeddings in a bag
are projected to the word embedding space (dubbed as pseudo
words), formed as a sentence, and then sent to the text encoder
to obtain the bag-of-regions embedding, which is aligned to the
corresponding image feature extracted by the frozen VLMs.
We believe VLMs have implicitly learned the inherent
compositional structure of multiple semantic concepts ( e.g.,
co-existence of stuff and things [1, 23]) from a colossal
amount of image-text pairs. A recent study, MaskCLIP [57],
leverages such a notion for zero-shot segmentation. Existing
distillation-based OVD approaches, however, have yet to
fully exploit the compositional structures encapsulated in
VLMs. Beyond the distillation of individual region embed-
ding, we propose to align the embedding of BAg of RegiONs ,
dubbed as BARON . Explicitly learning the co-existence of
visual concepts encourages the model to understand the
scene beyond just recognizing isolated individual objects.
BARON is easy to implement. As shown in Fig. 1(c),
BARON first samples contextually interrelated regions to
form a ‘bag’. Since the region proposal network (RPN) is
proven to cover potential novel objects [15,58], we explore a
neighborhood sampling strategy that samples boxes around
region proposals to help model the co-occurrence of a bag of
visual concepts. Next, BARON obtains the bag-of-regions
embeddings by projecting the regional features into the word
embedding space and encoding these pseudo words with
the text encoder (TE) of a frozen VLM [38]. By projecting
region features to pseudo words, BARON naturally allows
TE to effectively represent the co-occurring semantic con-
cepts and understand the whole scene. To retain the spatial
information of the region boxes, BARON projects the box
shape and box center position into embeddings and adds them
to the pseudo words before feeding them to TE.
To train BARON , we align the bag-of-regions embed-
dings to the teacher embeddings, which are obtained by
feeding the image crops that enclose the bag of regions to
the image encoder (IE) of the VLM. We adopt a contrastive
learning approach [46] to learn the pseudo words and the
bag-of-regions embeddings. Consistent with the VLMs’ pre-
training ( e.g., CLIP [38]), the contrastive loss pulls close
corresponding student (the detector) and teacher (IE) embed-
ding pairs and pushes away non-corresponding pairs.
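A condensed sketch of this alignment step under simplifying assumptions (pre-computed student and teacher bag embeddings, a symmetric InfoNCE loss); the actual training additionally handles pseudo-word projection, box shape/position embeddings, and bag sampling.

```python
import torch
import torch.nn.functional as F

def bag_alignment_loss(student_bag_emb, teacher_bag_emb, tau=0.03):
    """Contrastive alignment of bag-of-regions embeddings.
    student_bag_emb: (B, D) embeddings produced by passing each bag's
        pseudo words through the frozen text encoder.
    teacher_bag_emb: (B, D) image-encoder features of the crops that
        enclose the corresponding bags."""
    s = F.normalize(student_bag_emb, dim=-1)
    t = F.normalize(teacher_bag_emb, dim=-1)
    logits = s @ t.T / tau                      # (B, B) similarity matrix
    labels = torch.arange(s.shape[0])
    # pull matching (student, teacher) pairs together and push others apart,
    # symmetrically over both directions as in CLIP-style pre-training
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.T, labels))
```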
We conduct extensive experiments on two challeng-
ing benchmarks, OV-COCO and OV-LVIS. The proposed
method consistently outperforms existing state-of-the-art
methods [10, 52, 58] in different settings. Combined with
Faster R-CNN, BARON achieves a 34.0 (4.6 increase) box
AP50of novel categories on OV-COCO and 22.6 (2.8 in-
crease) mask mAP of novel categories on OV-LVIS. It is
noteworthy that BARON can also distill knowledge from
caption supervision – it achieves 32.7 box AP 50of novel cat-
egories on OV-COCO, outperforming previous approaches
that use COCO caption [13, 53, 56, 58].
|
Xu_Generating_Features_With_Increased_Crop-Related_Diversity_for_Few-Shot_Object_Detection_CVPR_2023 | Abstract
Two-stage object detectors generate object proposals
and classify them to detect objects in images. These pro-
posals often do not contain the objects perfectly but overlap
with them in many possible ways, exhibiting great variabil-
ity in the difficulty levels of the proposals. Training a ro-
bust classifier against this crop-related variability requires
abundant training data, which is not available in few-shot
settings. To mitigate this issue, we propose a novel vari-
ational autoencoder (VAE) based data generation model,
which is capable of generating data with increased crop-
related diversity. The main idea is to transform the latent
space such latent codes with different norms represent dif-
ferent crop-related variations. This allows us to generate
features with increased crop-related diversity in difficulty
levels by simply varying the latent norm. In particular, each
latent code is rescaled such that its norm linearly correlates
with the IoU score of the input crop w.r.t. the ground-truth
box. Here the IoU score is a proxy that represents the dif-
ficulty level of the crop. We train this VAE model on base
classes conditioned on the semantic code of each class and
then use the trained model to generate features for novel
classes. In our experiments our generated features con-
sistently improve state-of-the-art few-shot object detection
methods on the PASCAL VOC and MS COCO datasets.
| 1. Introduction
Object detection plays a vital role in many computer vi-
sion systems. However, training a robust object detector
often requires a large amount of training data with accurate
bounding box annotations. Thus, there has been increas-
ing attention on few-shot object detection (FSOD), which
learns to detect novel object categories from just a few an-
notated training samples. It is particularly useful for prob-
lems where annotated data can be hard and costly to ob-
tain such as rare medical conditions [31, 41], rare animal
species [20, 44], satellite images [2, 19], or failure cases in
autonomous driving systems [27, 28, 36].
Figure 1. Robustness to different object crops of the same ob-
ject instance. (a) The classifier head of the state-of-the-art FSOD
method [33] classifies correctly a simple crop of the bird but mis-
classifies a hard crop where some parts are missing. (b) Our
method can handle this case since it is trained with additional gen-
erated features with increased crop-related diversity. We show the
class with the highest confidence score.
For the most part, state-of-the-art FSOD methods are
built on top of a two-stage framework [35], which includes
a region proposal network that generates multiple image
crops from the input image and a classifier that labels these
proposals. While the region proposal network generalizes
well to novel classes, the classifier is more error-prone due
to the lack of training data diversity [40]. To mitigate this is-
sue, a natural approach is to generate additional features for
novel classes [12, 55, 57]. For example, Zhang et al. [55]
propose a feature hallucination network to use the varia-
tion from base classes to diversify training data for novel
classes. For zero-shot detection (ZSD), Zhu et al. [57] pro-
pose to synthesize visual features for unseen objects based
on a conditional variational auto-encoder. Although much
progress has been made, the lack of data diversity is still a
challenging issue for FSOD methods.
Here we discuss a specific type of data diversity that
greatly affects the accuracy of FSOD algorithms. Specifi-
cally, given a test image, the classifier needs to accurately
classify multiple object proposals1that overlap the object
instance in various ways. The features of these image crops
exhibit great variability induced by different object scales,
object parts included in the crops, object positions within
the crops, and backgrounds. We observe a typical scenario
where the state-of-the-art FSOD method, DeFRCN [33],
only classifies correctly a few among many proposals over-
lapping an object instance of a few-shot class. In fact, dif-
ferent ways of cropping an object can result in features with
various difficulty levels. An example is shown in Figure 1a
where the image crop shown in the top row is classified cor-
rectly while another crop shown in the bottom row confuses
the classifier due to some missing object parts. In general,
the performance of the method on those hard cases is signif-
icantly worse than on easy cases (see section 5.4). However,
building a classifier robust against crop-related variation is
challenging since there are only a few images per few-shot
class.
In this paper, we propose a novel data generation method
to mitigate this issue. Our goal is to generate features
with diverse crop-related variations for the few-shot classes
and use them as additional training data to train the classi-
fier. Specifically, we aim to obtain a diverse set of features
whose difficulty levels vary from easy to hard w.r.t. how
the object is cropped.2To achieve this goal, we design our
generative model such that it allows us to control the diffi-
culty levels of the generated samples. Given a model that
generates features from a latent space, our main idea is to
enforce that the magnitude of the latent code linearly corre-
lates with the difficulty level of the generated feature, i.e.,
the latent code of a harder feature is placed further away
from the origin and vice versa. In this way, we can con-
trol the difficulty level by simply changing the norm of the
corresponding latent code.
In particular, our data generation model is based on
a conditional variational autoencoder (VAE) architecture.
The VAE consists of an encoder that maps the input to a
latent representation and a decoder that reconstructs the in-
put from this latent code. In our case, inputs to the VAE are
object proposal features, extracted from a pre-trained ob-
ject detector. The goal is to associate the norm (magnitude)
of the latent code with the difficulty level of the object pro-
posal. To do so, we rescale the latent code such that its norm
linearly correlates with the Intersection-Over-Union (IoU)
score of the input object proposal w.r.t. the ground-truth ob-
ject box. This IoU score is a proxy that partially indicates
the difficulty level: A high IoU score indicates that the ob-
1Note that an RPN typically outputs 1000 object proposals per image.
2In this paper, the difficulty level is strictly related to how the object is
cropped.ject proposal significantly overlaps with the object instance
while a low IoU score indicates a harder case where a part of
the object can be missing. With this rescaling step, we can
bias the decoder to generate harder samples by increasing
the latent code magnitude and vice versa. In this paper, we
use latent codes with different norms varying from small to
large to obtain a diverse set of features which can then serve
as additional training data for the few-shot classifier.
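A small sketch of the rescaling step (our own simplification; the linear mapping from IoU to target norm is a placeholder): the encoder's latent code is renormalized so that its magnitude tracks how hard the input crop is, which later allows sweeping the norm at generation time to obtain features of varying difficulty.

```python
import torch

def rescale_latent(z, iou, min_norm=1.0, max_norm=4.0):
    """Rescale latent codes so their L2 norm reflects crop difficulty:
    easy crops (high IoU with the GT box) get small norms, hard crops
    (low IoU) get large norms. z: (N, d) latents; iou: (N,) in [0, 1]."""
    target = (min_norm + (1.0 - iou) * (max_norm - min_norm)).unsqueeze(1)  # (N, 1)
    return z * target / z.norm(dim=1, keepdim=True).clamp(min=1e-6)

# At generation time, sweeping the norm of a sampled latent code (before
# decoding it together with the class embedding) yields features ranging
# from easy to hard for the target few-shot class.
z = torch.randn(5, 64)
easy = rescale_latent(z, torch.full((5,), 0.9))
hard = rescale_latent(z, torch.full((5,), 0.1))
```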
To apply our model to FSOD, we first train our VAE
model using abundant data from the base classes. The VAE
is conditioned on the semantic code of the input instance
category. After the VAE model is trained, we use the se-
mantic embedding of the few-shot class as the conditional
code to synthesize new features for the corresponding class.
In our experiments, we use our generated samples to fine-
tune the baseline few-shot object detector - DeFRCN [33].
Surprisingly, a vanilla conditional VAE model trained with
only ground-truth box features brings a 3.7% nAP50 im-
provement over the DeFRCN baseline in the 1-shot setting
of the PASCAL VOC dataset [4]. Note that we are the first
FSOD method using VAE-generated features to support the
training of the classifier. Our proposed Norm-VAE can fur-
ther improve this new state-of-the-art by another 2.1%, i.e.,
from 60% to 62.1%. In general, the generated features from
Norm-VAE consistently improve the state-of-the-art few-
shot object detector [33] for both PASCAL VOC and MS
COCO [24] datasets.
Our main contributions can be summarized as follows:
• We show that lack of crop-related diversity in training
data of novel classes is a crucial problem for FSOD.
• We propose Norm-VAE, a novel VAE architecture that
can effectively introduce crop-related diversity in diffi-
culty levels into the generated samples to support the
training of FSOD classifiers.
• Our experiments show that the object detectors trained
with our additional features achieve state-of-the-art
FSOD in both PASCAL VOC and MS COCO datasets.
|
Wang_Position-Guided_Text_Prompt_for_Vision-Language_Pre-Training_CVPR_2023 | Abstract
Vision-Language Pre-Training (VLP) has shown promis-
ing capabilities to align image and text pairs, facilitating
a broad variety of cross-modal learning tasks. However,
we observe that VLP models often lack the visual ground-
ing/localization capability which is critical for many down-
stream tasks such as visual reasoning. In this work, we pro-
pose a novel Position-guided Text Prompt (PTP) paradigm
to enhance the visual grounding ability of cross-modal mod-
els trained with VLP . Specifically, in the VLP phase, PTP
divides the image into N×N blocks, and identifies the ob-
jects in each block through the widely used object detector
in VLP . It then reformulates the visual grounding task into
a fill-in-the-blank problem given a PTP by encouraging the
model to predict the objects in the given blocks or regress
the blocks of a given object, e.g. filling “[P]” or “[O]”
in a PTP “The block [P] has a [O]”. This mechanism im-
proves the visual grounding capability of VLP models and
thus helps them better handle various downstream tasks. By
introducing PTP into several state-of-the-art VLP frame-
works, we observe consistently significant improvements
across representative cross-modal learning model architec-
tures and several benchmarks, e.g. zero-shot Flickr30K
Retrieval (+4.8 in average recall@1) for ViLT [16] base-
line, and COCO Captioning (+5.3 in CIDEr) for SOTA
BLIP [19] baseline. Moreover, PTP achieves comparable
results with object-detector based methods [8, 23, 45], and
much faster inference speed since PTP discards its object
detector for inference while the later cannot.
| 1. Introduction
The vision-and-language pre-training (VLP) models like
CLIP [31], ALIGN [14] and CoCa [42] have greatly ad-
vanced the state-of-the-art performance of many cross-
modal learning tasks, e.g., visual question answering [4],
reasoning [35], and image captioning [1, 7]. Typically, a
generic cross-modal model is first pre-trained on large-scale
*Corresponding authors.
Figure 1. Comparison of three VLP learning frameworks and their
performance. (a) compares region feature based VLP (RF-VLP),
end-to-end VLP (E2E-VLP), and our position-guided text prompt
based VLP (PTP-VLP). Our PTP-VLP only needs about 15ms
for inference which is the same as E2E-VLP but is much faster
than RF-VLP. (b) On position-aware questions widely occurred in
many downstream tasks, with masked text and image input, RF-
VLP and PTP-VLP can well predict objects, while E2E-VLP can-
not pinpoint the position information of the object in the image.
image-caption data in a self-supervised fashion to see suf-
ficient data for better generalization ability, and then fine-
tuned on downstream tasks for adaptation. With remarkable
effectiveness, this pre-training-then-fine-tuning paradigm
of VLP models has dominated the multi-modality field.
In VLP, visual grounding is critical for many tasks as
observed in previous research [3, 40]. To model the posi-
tion information, traditional VLP models [3,23,45] (the top
of Fig. 1 (a)) employ a faster-rcnn [33] pre-trained on the
1600 classes Visual Genome [17] to extract salient region
features and bounding boxes. Then these models use both
the bounding box and object feature as input. In this way,
these models learn not only what objects are contained in
the salient region but also where these objects are. However,
when using region features as input, the model pays atten-
tion to the items inside the bounding boxes and ignores the
contextual data outside of them [13]. More seriously, on
downstream tasks, these methods still need to use detectors
to extract objects, resulting in very slow inference speed.
To get rid of region feature for higher efficiency, recent
works [13,16] (the middle of Fig. 1 (a)) adopt raw-pixel im-
age as input instead of region features, and train the model
with Image Text Matching [8] and Masked Language Mod-
eling [10] loss end-to-end. Despite their faster speed, these
models cannot well learn the object positions and also their
relations. As shown in Fig. 1 (b), we observe that a well-
trained ViLT model [16] knows well what objects are in an
image. But this model does not learn the object positions
accurately. For example, it wrongly predicts “ the dog is
on the right of this image ”. However, during fine-tuning,
downstream tasks actually require the object position infor-
mation to comprehensively understand the image. Such a
gap largely impairs the performance on downstream tasks.
In this work, we aim to ease the position-missing prob-
lem for these end-to-end models, and keep fast inference
time for downstream tasks at the same time. Inspired
by recent prompt learning methods [15, 25, 32, 41],
we propose a novel and effective Position-guided Text
Prompt (PTP) paradigm (the bottom of Fig. 1 (a)) for
cross-modality model pre-training. The key insight is that
by adding position-based co-referential markers in both im-
age and text, visual grounding can be reformulated into a
fill-in-the-blank problem, maximally simplifying the learning
of object information. PTP grounds language expressions
in images through two components: 1) block tag genera-
tion, dividing images into N×N blocks and identifying
objects, and 2) text prompt generation, placing query text
into a position-based template.
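A small sketch of these two components is given below; the grid size, the box-center assignment rule, and the helper names are our own illustrative assumptions, while the template follows the quoted form "The block [P] has a [O]".

def block_index(box, img_w, img_h, n=3):
    # Return the 1-based index of the N x N block containing the box center
    # (one simple assignment rule; the detector box format is [x1, y1, x2, y2]).
    cx = (box[0] + box[2]) / 2.0
    cy = (box[1] + box[3]) / 2.0
    col = min(int(cx / img_w * n), n - 1)
    row = min(int(cy / img_h * n), n - 1)
    return row * n + col + 1

def make_position_prompt(obj_name, box, img_w, img_h, n=3):
    # Fill the position-based template with the block index [P] and object tag [O].
    p = block_index(box, img_w, img_h, n)
    return f"The block {p} has a {obj_name}."

# e.g. a detector output ("dog", [10, 40, 120, 200]) on a 300x300 image
print(make_position_prompt("dog", [10, 40, 120, 200], 300, 300))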
By bringing the position information into pre-training,
our PTP enables strong visual grounding capabilities of
VLP models. At the same time, as we do not use an ob-
ject detector for downstream tasks, we keep a fast inference
time. Experimental results show that our method outper-
forms its counterparts by a large margin, especially in the
zero-shot setting. For example, our PTP-BLIP achieves a
3.4% absolute accuracy gain over CoCa [42] in zero-shot
retrieval Recall@1 on the COCO dataset with much less training
data (4M vs. 3B) and a much smaller model (220M vs.
2.1B). In addition to the zero-shot task, we show that PTP
can achieve strong performance for object position guided
visual reasoning and the other common VLP tasks such as
visual question answering, and image captioning.
|
Wu_RIDCP_Revitalizing_Real_Image_Dehazing_via_High-Quality_Codebook_Priors_CVPR_2023 | Abstract
Existing dehazing approaches struggle to process real-
world hazy images owing to the lack of paired real data
and robust priors. In this work, we present a new paradigm
for real image dehazing from the perspectives of synthe-
sizing more realistic hazy data and introducing more ro-
bust priors into the network. Specifically, (1) instead of
adopting the de facto physical scattering model, we re-
think the degradation of real hazy images and propose a
phenomenological pipeline considering diverse degrada-
tion types. (2) We propose a RealImage Dehazing net-
work via high-quality Codebook Priors (RIDCP). Firstly, a
VQGAN is pre-trained on a large-scale high-quality dataset
to obtain the discrete codebook, encapsulating high-quality
priors (HQPs). After replacing the negative effects brought
by haze with HQPs, the decoder equipped with a novel
normalized feature alignment module can effectively utilize
high-quality features and produce clean results. However,
although our degradation pipeline drastically mitigates the
domain gap between synthetic and real data, it is still in-
tractable to avoid it, which challenges HQPs matching in
the wild. Thus, we re-calculate the distance when match-
ing the features to the HQPs by a controllable matching
operation, which facilitates finding better counterparts. We
provide a recommendation to control the matching based
on an explainable solution. Users can also flexibly adjust
the enhancement degree as per their preference. Extensive
experiments verify the effectiveness of our data synthesis
pipeline and the superior performance of RIDCP in real
image dehazing. Code and data are released at https://rq-
wu.github.io/projects/RIDCP.
| 1. Introduction
Image dehazing aims to recover clean images from their
hazy counterparts, which is essential for computational pho-
tography and high-level tasks [20, 32]. The hazy image for-
mulation is commonly described by a physical scattering
*Corresponding author
(a) Hazy input (b) DAD [33] (c) PSD [7]
(d) Ours
Figure 1. Visual comparisons on a typical hazy image. The pro-
posed method generates cleaner results than other two state-of-the-
art real image dehazing approaches. The enhancement degree of
our result can be flexibly adjusted by adopting different parame-
ters in the real-domain adaptation phase. The image with a golden
border is the result obtained under our recommended parameter.
model:
I(x) = J(x) t(x) + A (1 − t(x)),   (1)
where I(x) denotes the hazy image and J(x) is its corre-
sponding clean image. The variables A and t(x) are the
global atmosphere light and the transmission map, respectively.
The transmission map t(x) = e^{−β d(x)} depends on the scene
depth d(x) and the haze density coefficient β.
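For reference, Eq. (1) can be applied directly to synthesize a hazy image from a clean image and a depth map; this is the classical formulation that the proposed phenomenological pipeline moves beyond, and the sample values of A and β below are arbitrary.

import numpy as np

def synthesize_haze(J, depth, A=0.9, beta=1.2):
    # Apply the physical scattering model I = J*t + A*(1 - t), with t = exp(-beta*d).
    # J:     (H, W, 3) clean image in [0, 1]
    # depth: (H, W)    scene depth map
    t = np.exp(-beta * depth)[..., None]     # transmission map, broadcast over channels
    return J * t + A * (1.0 - t)

J = np.random.rand(64, 64, 3)
d = np.random.rand(64, 64) * 3.0
I = synthesize_haze(J, d)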
Given a hazy image, restoring its clean version is highly
ill-posed. To mitigate the ill-posedness of this problem, var-
ious priors, e.g., dark channel prior [16], color attenuation
prior [44], and color lines [12] have been proposed in exist-
ing traditional methods. Nevertheless, the statistical priors
cannot cover diverse cases in real-world scenes, leading to
suboptimal dehazing performance.
With the advent of deep learning, image dehazing has
achieved remarkable progress. Existing methods either
adopt deep networks to estimate physical parameters [5,21,
31] or directly restore haze-free images [10, 15, 27, 30, 40].
However, image dehazing neural networks perform lim-
ited generalization to real scenes, owing to the difficulty in
collecting large-scale yet perfectly aligned paired training
data andsolving the uncertainty of the ill-posed problem
without robust priors . Concretely, 1) collecting large-scale
and perfectly aligned hazy images with the clean counter-
part is incredibly difficult, if not impossible. Thus, most of
the existing deep models use synthetic data for training, in
which the hazy images are generated using Eq. (1), lead-
ing to the neglect of multiple degradation factors. There
are some real hazy image datasets [2, 3] with paired data,
but the size and diversity are insufficient. Moreover, these
datasets deviate from the hazy images captured in the wild.
These shortcomings inevitably decrease the capability of
deep models in real scenes. 2) Real image dehazing is a
highly ill-posed issue. Generally, addressing an uncertain
mapping problem often needs the support of priors. How-
ever, it is difficult to obtain robust priors that can cover the
diverse scenes of real hazy images, which also limits the
performance of dehazing algorithms. Recently, many stud-
ies for real image dehazing try to solve these two issues
by domain adaptation from the perspective of data genera-
tion [33,39] or priors guidance [7,23], but still cannot obtain
desirable results.
In this work, we present a new paradigm for real image
dehazing motivated by addressing the above two problems.
To obtain large-scale and perfectly aligned paired training
data, we rethink the degradation of hazy images by observ-
ing amounts of real hazy images and propose a novel data
generation pipeline considering multiple degradation fac-
tors. In order to solve the uncertainty of the ill-posed issue,
we attempt to train a VQGAN [11] on high-quality images
to extract more robust high-quality priors (HQPs). The VQ-
GAN only learns high-quality image reconstruction, so it
naturally contains the robust HQPs that can help hazy fea-
tures jump to the clean domain. The observation in Sec. 4.1
further verifies our motivation. Thus, we propose the Real
Image Dehazing network via high-quality Codebook Priors
(RIDCP). The codebook and decoder of VQGAN are fixed
to provide HQPs. Then, RIDCP is equipped with an en-
coder that helps find the correct HQPs, and a new decoder
that utilizes the features from the fixed decoder and pro-
duces the final result. Moreover, we propose a novel Nor-
malized Feature Alignment (NFA) that can mitigate the dis-
tortion and balance the features for better fusion.
In comparison to previous methods [6, 14, 43] that intro-
duce codebook for image restoration, we further design a
unique real domain adaptation strategy based on the char-
acteristics of VQGAN and the statistical results. Intuitively,
we propose Controllable HQPs Matching (CHM) operation
that replaces the nearest-neighbour matching by imposing
elaborate-designed weights on the distances between fea-
tures and HQPs during the inference phase. The weights are
determined by a controllable parameter α and the statistical
distribution gap of HQPs activation in Sec. 4.3. By adjust-
ing α, the distribution of HQPs activation can be shifted.
Moreover, we present a theoretically feasible solution to
obtain the optimal α by minimizing the Kullback-Leibler
Divergence of two probability distributions. More signifi-
cantly, the value of α can be visually reflected as the en-
hancement degree as shown in Figure 1(d), and users are al-
lowed to adjust the dehazing results as per their preference.
Our CHM is effective, flexible, and explainable.
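A minimal sketch of the matching step with re-weighted distances is given below; the exact form of the weights in CHM is derived from the activation statistics and α as described above, so the all-ones weight vector used here is only a placeholder.

import torch

def controllable_hqp_matching(feats, codebook, weights):
    # Match each feature to a high-quality prior (HQP) with weighted distances.
    # feats:    (N, D) encoder features of a hazy image
    # codebook: (K, D) fixed HQPs from the pre-trained VQGAN
    # weights:  (K,)   per-code weights controlled by alpha (placeholder here);
    #           vanilla nearest-neighbour matching corresponds to all-ones weights.
    dist = torch.cdist(feats, codebook)              # (N, K) pairwise L2 distances
    idx = (dist * weights.unsqueeze(0)).argmin(dim=1)
    return codebook[idx], idx

feats = torch.randn(16, 64)
codebook = torch.randn(512, 64)
weights = torch.ones(512)                            # toy setting: unweighted matching
matched, idx = controllable_hqp_matching(feats, codebook, weights)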
Compared with the state-of-the-art real image dehazing
methods, e.g., DAD [33] and PSD [7], only the proposed
RIDCP can effectively process the hazy images captured
in the wild while generating adjustable results, which are
shown in Figure 1. The contributions of our work can be
summarized as follows.
• We present a new paradigm to push the frontier of deep
learning-based image dehazing towards real scenes.
• We are the first to leverage the high-quality codebook
prior in the real image dehazing task. The controllable
HQPs matching operation is proposed to overcome the
gap between synthetic and real domains and produce
adjustable results.
• We re-formulate the degradation model of real hazy
images and propose a phenomenological degradation
pipeline to simulate the hazy images captured in the
wild.
|
Xu_DisCoScene_Spatially_Disentangled_Generative_Radiance_Fields_for_Controllable_3D-Aware_Scene_CVPR_2023 | Abstract
Existing 3D-aware image synthesis approaches mainly
focus on generating a single canonical object and show
limited capacity in composing a complex scene containing
a variety of objects. This work presents DisCoScene: a 3D-
aware generative model for high-quality and controllable
scene synthesis. The key ingredient of our method is a
very abstract object-level representation ( i.e., 3D bounding
boxes without semantic annotation) as the scene layout
prior, which is simple to obtain, general to describe various
scene contents, and yet informative to disentangle objects
and background. Moreover, it serves as an intuitive user
control for scene editing. Based on such a prior, the
proposed model spatially disentangles the whole scene into
object-centric generative radiance fields by learning on
only 2D images with the global-local discrimination. Our
model obtains the generation fidelity and editing flexibility
of individual objects while being able to efficiently compose
objects and the background into a complete scene. We
demonstrate state-of-the-art performance on many scene
datasets, including the challenging Waymo outdoor dataset.
Project page can be found here.
| 1. Introduction
3D-consistent image synthesis from single-view 2D data
has become a trendy topic in generative modeling. Recent
approaches like GRAF [40] and Pi-GAN [5] introduce
3D inductive bias by taking neural radiance fields [1,
28, 29, 36, 38] as the underlying representation, gaining
the capability of geometry modeling and explicit camera
control. Despite their success in synthesizing individual
objects ( e.g., faces, cats, cars), they struggle on scene
images that contain multiple objects with non-trivial layouts
and complex backgrounds. The varying quantity and large
*Work partially done during internships at Snap Inc.
†Corresponding author.
diversity of objects, along with the intricate spatial arrange-
ment and mutual occlusions, bring enormous challenges,
which exceed the capacity of the object-level generative
models [4, 13, 15, 33, 45, 46, 60].
Recent efforts have been made towards 3D-aware scene
synthesis. Despite the encouraging progress, there are still
fundamental drawbacks. For example, Generative Scene
Networks (GSN) [8] achieve large-scale scene synthesis by
representing the scene as a grid of local radiance fields and
training on 2D observations from continuous camera paths.
However, object-level editing is not feasible due to spatial
entanglement and the lack of explicit object definition. On
the contrary, GIRAFFE [32] explicitly composites object-
centric radiance fields [16, 34, 56, 63] to support object-
level control. Yet, it works poorly on challenging datasets
containing multiple objects and complex backgrounds due
to the absence of proper spatial priors.
To achieve high-quality and controllable scene synthesis,
the scene representation stands out as one critical design
focus. A well-structured scene representation can scale
up the generation capability and tackle the aforementioned
challenges. Imagine, given an empty apartment and a
furniture catalog, what does it take for a person to arrange
the space? Would people prefer to walk around and throw
things here and there, or instead figure out an overall
layout and then attend to each location for the detailed
selection? Obviously, a layout describing the arrangement
of each furniture in the space substantially eases the scene
composition process [17, 26, 58]. From this vantage point,
here comes our primary motivation — an abstract object-
oriented scene representation, namely a layout prior , could
facilitate learning from challenging 2D data as a lightweight
supervision signal during training and allow user interaction
during inference. More specifically, to make such a prior
easy to obtain and generalizable across different scenes, we
define it as a set of object bounding boxes without semantic
annotation, which describes the spatial composition of ob-
jects in the scene and supports intuitive object-level editing.
In this work, we present DisCoScene , a novel 3D-aware
generative model for complex scenes. Our method allows
for high-quality scene synthesis on challenging datasets
and flexible user control of both the camera and scene
objects. Driven by the aforementioned layout prior , our
model spatially disentangles the scene into compositable
radiance fields which are shared in the same object-centric
generative model. To make the best use of the prior as a
lightweight supervision during training, we propose global-
local discrimination which attends to both the whole scene
and individual objects to enforce spatial disentanglement
between objects and against the background. Once the
model is trained, users can generate and edit a scene by
explicitly controlling the camera and the layout of objects’
bounding boxes. In addition, we develop an efficient render-
ing pipeline tailored for the spatially-disentangled radiance
fields, which significantly accelerates object rendering and
scene composition for both training and inference stages.
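To make the role of the layout prior concrete, the following sketch composes per-object density and color queries from a set of axis-aligned bounding boxes into a single scene query; the stand-in object field, the box-local normalization, and the density-weighted color accumulation are simplified assumptions rather than the exact DisCoScene formulation.

import torch
import torch.nn as nn

class ToyObjectField(nn.Module):
    # Stand-in for an object-centric generative radiance field.
    def __init__(self, dim=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, 4))

    def forward(self, x_local):                     # x_local: (N, 3) in [-1, 1]^3
        out = self.mlp(x_local)
        sigma = torch.relu(out[:, :1])              # non-negative density
        rgb = torch.sigmoid(out[:, 1:])             # color in [0, 1]
        return sigma, rgb

def compose_scene(points, boxes, field):
    # Accumulate densities/colors of all objects at the given 3D points.
    # points: (N, 3) sample points in world space
    # boxes:  list of (center (3,), half_size (3,)) boxes, i.e. the layout prior
    sigma = torch.zeros(points.shape[0], 1)
    rgb = torch.zeros(points.shape[0], 3)
    for center, half in boxes:
        local = (points - center) / half            # normalize into the box frame
        inside = (local.abs() <= 1.0).all(dim=1)    # only query points inside the box
        if inside.any():
            s, c = field(local[inside])
            sigma[inside] += s
            rgb[inside] += s * c                    # density-weighted color accumulation
    return sigma, rgb / (sigma + 1e-8)

pts = torch.rand(1024, 3) * 4 - 2
layout = [(torch.tensor([0.0, 0.0, 0.0]), torch.tensor([0.5, 0.5, 0.5])),
          (torch.tensor([1.0, 0.0, 0.5]), torch.tensor([0.4, 0.4, 0.4]))]
sigma, rgb = compose_scene(pts, layout, ToyObjectField())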
Our method is evaluated on diverse datasets, including
both indoor and outdoor scenes. Qualitative and quanti-
tative results demonstrate that, compared to existing base-
lines, our method achieves state-of-the-art performance in
terms of both generation quality and editing capability.
Tab. 1 compares DisCoScene with relevant works. It is
worth noting that, to the best of our knowledge, DisCoScene
stands as the first method that achieves high-quality 3D-
aware generation on challenging datasets like Waymo [48],
while enabling interactive object manipulation.
|
Wei_Focused_and_Collaborative_Feedback_Integration_for_Interactive_Image_Segmentation_CVPR_2023 | Abstract
Interactive image segmentation aims at obtaining a seg-
mentation mask for an image using simple user annotations.
During each round of interaction, the segmentation result
from the previous round serves as feedback to guide the
user’s annotation and provides dense prior information for
the segmentation model, effectively acting as a bridge be-
tween interactions. Existing methods overlook the impor-
tance of feedback or simply concatenate it with the orig-
inal input, leading to underutilization of feedback and an
increase in the number of required annotations. To address
this, we propose an approach called Focused and Collabo-
rative Feedback Integration (FCFI) to fully exploit the feed-
back for click-based interactive image segmentation. FCFI
first focuses on a local area around the new click and cor-
rects the feedback based on the similarities of high-level
features. It then alternately and collaboratively updates the
feedback and deep features to integrate the feedback into the
features. The efficacy and efficiency of FCFI were validated
on four benchmarks, namely GrabCut, Berkeley, SBD, and
DAVIS. Experimental results show that FCFI achieved new
state-of-the-art performance with less computational over-
head than previous methods. The source code is available at
https://github.com/veizgyauzgyauz/FCFI .
| 1. Introduction
Interactive image segmentation aims to segment a target
object in an image given simple annotations, such as bound-
ing boxes [17, 32, 37, 38, 40], scribbles [1, 10, 18], extreme
points [2,27,29,42], and clicks [6,15,23,24,34,39]. Due to
its inherent characteristic, i.e., interactivity, it allows users
to add annotations and receive refined segmentation results
iteratively. Unlike semantic segmentation, interactive im-
age segmentation can be applied to unseen categories (cate-
gories that do not exist in the training dataset), demonstrat-
ing its generalization ability. Additionally, compared with
instance segmentation, interactive segmentation is specific
[Figure 1: (a) Independent Segmentation, (b) Feedback-as-Input Segmentation, and (c) Deep Feedback-Integrated Segmentation (Ours); each pipeline takes the image I, the clicks P, and (for (b) and (c)) the previous feedback M_{t-1}, outputs the segmentation M_t, and is shown with an example IoU value.]
Figure 1. An overview of the interactive system and the com-
parison among (a) independent segmentation [24], (b) feedback-
as-input segmentation [35], and (c) deep feedback-integrated seg-
mentation. See the description in Sec. 1. Throughout this paper,
green/red clicks represent foreground/background annotations.
since it only localizes the annotated instance. Owing to
these advantages, interactive image segmentation is a pre-
liminary step for many applications, including image edit-
ing and image composition. This paper specifically focuses
on interactive image segmentation using click annotations
because clicks are less labor-intensive to obtain than other
types of annotations.
The process of interaction is illustrated in Fig. 1. Given
an image, users start by providing one or more clicks to
label background or foreground pixels. The segmentation
model then generates an initial segmentation mask for the
target object based on the image and the clicks. If the seg-
mentation result is unsatisfactory, users can continue to in-
teract with the segmentation system by marking new clicks
to indicate wrongly segmented regions and obtain a new
segmentation mask. From the second round of interaction,
the segmentation result of the previous interaction - referred
to as feedback in this paper - will be fed back into the cur-
rent interaction. This feedback is instrumental in user label-
ing and provides the segmentation model with prior infor-
mation, which can facilitate convergence and improve the
accuracy of segmentation [35].
Previous methods have made tremendous progress in
interactive image segmentation. Some of them, such as
[24,26,35], focused on finding useful ways to encode anno-
tated clicks, while others, like [5,15,19,21,34], explored ef-
ficient neural network architectures to fully utilize user an-
notations. However, few methods have investigated how to
exploit informative segmentation feedback. Existing meth-
ods typically treated each round of interaction as independent
[5, 12, 15, 19, 24, 26] or simply concatenated feedback with
the initial input [6,23,25,34,35]. The former approach (Fig.
1(a)) failed to leverage the prior information provided by
feedback, resulting in a lack of consistency in the segmen-
tation results generated by two adjacent interactions. In the
latter case (Fig. 1(b)), feedback was only directly visible to
the first layer of the network, and thus the specific spatial
and semantic information it carried would be easily diluted
or even lost through many convolutional layers, similar to
the case of annotated clicks [12].
In this paper, we present Focused and Collaborative
Feedback Integration (FCFI) to exploit the segmentation
feedback for click-based interactive image segmentation.
FCFI consists of two modules: a Focused Feedback Cor-
rection Module (FFCM) for local feedback correction and a
Collaborative Feedback Fusion Module (CFFM) for global
feedback integration into deep features. Specifically, the
FFCM focuses on a local region centered on the new an-
notated click to correct feedback. It measures the feature
similarities between each pixel in the region and the click.
The similarities are used as weights to blend the feedback
and the annotated label. The CFFM adopts a collabora-
tive calibration mechanism to integrate the feedback into
deep layers (Fig. 1(c)). First, it employs deep features to
globally update the corrected feedback for further improv-
ing the quality of the feedback. Then, it fuses the feed-
back with deep features via a gated residual connection.
Embedded with FCFI, the segmentation network leveraged
the prior information provided by the feedback and outper-
formed many previous methods.
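The focused correction in the FFCM can be pictured with the following sketch, which blends the previous feedback with the new click label using feature similarity inside a local crop centered on the click; the crop radius and the use of clamped cosine similarity are our own illustrative choices.

import torch
import torch.nn.functional as F

def focused_feedback_correction(feedback, feats, click_yx, click_label, radius=32):
    # Correct the previous-round feedback in a window centered on the new click.
    # feedback:    (H, W) previous segmentation probabilities in [0, 1]
    # feats:       (C, H, W) high-level features of the current image
    # click_yx:    (y, x) position of the new click
    # click_label: 1.0 for a foreground click, 0.0 for a background click
    H, W = feedback.shape
    y, x = click_yx
    y0, y1 = max(0, y - radius), min(H, y + radius + 1)
    x0, x1 = max(0, x - radius), min(W, x + radius + 1)

    click_feat = feats[:, y, x]                                     # (C,)
    local = feats[:, y0:y1, x0:x1]                                  # (C, h, w)
    sim = F.cosine_similarity(local, click_feat[:, None, None].expand_as(local), dim=0)
    sim = sim.clamp(min=0)                                          # blending weights in [0, 1]

    corrected = feedback.clone()
    corrected[y0:y1, x0:x1] = sim * click_label + (1 - sim) * feedback[y0:y1, x0:x1]
    return corrected

feedback = torch.rand(256, 256)
feats = torch.randn(64, 256, 256)
out = focused_feedback_correction(feedback, feats, (120, 140), click_label=1.0)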
|
Wu_Unsupervised_Visible-Infrared_Person_Re-Identification_via_Progressive_Graph_Matching_and_Alternate_CVPR_2023 | Abstract
Unsupervised visible-infrared person re-identification is
a challenging task due to the large modality gap and the
unavailability of cross-modality correspondences. Cross-
modality correspondences are very crucial to bridge the
modality gap. Some existing works try to mine cross-
modality correspondences, but they focus only on local in-
formation. They do not fully exploit the global relationship
across identities, thus limiting the quality of the mined cor-
respondences. Worse still, the number of clusters of the two
modalities is often inconsistent, exacerbating the unrelia-
bility of the generated correspondences. In response, we
devise a Progressive Graph Matching method to globally
mine cross-modality correspondences under cluster imbal-
ance scenarios. PGM formulates correspondence mining
as a graph matching process and considers the global infor-
mation by minimizing the global matching cost, where the
matching cost measures the dissimilarity of clusters. Be-
sides, PGM adopts a progressive strategy to address the
imbalance issue with multiple dynamic matching processes.
Based on PGM, we design an Alternate Cross Contrastive
Learning (ACCL) module to reduce the modality gap with
the mined cross-modality correspondences, while mitigat-
ing the effect of noise in correspondences through an alter-
nate scheme. Extensive experiments demonstrate the reli-
ability of the generated correspondences and the effective-
ness of our method.
| 1. Introduction
The target of visible-infrared person re-identification
(VI-ReID) [ 23,25,38,51,52] is to recognize the same
person across a set of visible/infrared gallery images when
given an image from another modality. This task has at-
*Corresponding Author: Mang Ye
[Figure 1: (a) Illustration of the problem (feature distribution of the two modalities); (b) Illustration of the existing work (locally matched correspondences, cost matrix, final matching results, with remaining unmatched nodes discarded); (c) Illustration of the proposed Progressive Graph Matching (modality graphs, cost matrices, reconstructed modality graph, and globally optimal correspondences).]
Figure 1. Idea illustration. Different colors indicate different
pedestrians. (a) illustrates the feature distribution of randomly
selected persons of SYSU-MM01. The cross-modality discrep-
ancy is much larger than the inter-class variance within each
modality. (b) abstracts the existing solution. The locally closest
unmatched cross-modality cluster is treated as the correspondence.
The bottom of (b) indicates its two drawbacks: 1) it ignores the
global information among different identities, and 2) it ignores the
cluster imbalance issue across modalities and discards remaining
nodes. (c) is the progressive graph matching method. We utilize
graph matching to obtain the globally optimal correspondences and
design a progressive strategy to handle the cluster imbalance issue.
tracted extensive interest recently due to its significance in
night intelligent surveillance and public security. Many pro-
gresses [ 3,5,29,40,51] have been made in VI-ReID. How-
ever, these methods require well-annotated training setswhich are exhausting to obtain, so they are less applicable
in real scenarios. In light of this limitation, we attempt to
investigate an unsupervised solution for VI-ReID.
For unsupervised single-modality ReID, widely-studied
works [ 4,7,9,34,42,57] utilize cluster-based methods to
produce supervision signals in the homogeneous space.
However, in the visible-infrared heterogeneous space, theconsistency of features and semantics cannot be maintained
due to the large modality gap. Specifically, the cross-modality difference is much larger than the inter-class dis-crepancy within each modality (see Fig. 1a). Hence, we
cannot establish connections between the two modalities byadopting off-the-shelf clustering methods. However, cross-modality correspondences play an important role in bridg-ing the modality gap between two heterogeneous modal-
ities [ 25,29,40,51,52]. Without reliable cross-modality
correspondences, the model can hardly learn modality-invariant features.
Some efforts [ 22,33,45] have been made recently to find
cross-modality correspondences. However, most of the ex-
isting methods consider only local information and do not
take full advantage of the global relationship among dif-
ferent identities (see Fig. 1b). What’s worse, they are not
applicable to scenarios with cluster imbalance problems,
since some clusters cannot find their correspondences, hin-
dering the later modality gap reduction process. To globally
mine cross-modality correspondences under cluster imbal-
ance scenarios, we propose a Progressive Graph Matching
(PGM) method. It is featured for two designs, i.e., 1) con-
necting the two modalities with graph matching and 2) ad-
dressing the imbalance issue with the progressive strategy.
First, we employ graph matching to fully utilize the rela-
tionship among different identities under global constraints
(see Fig. 1c left). PGM formulates the cross-modality cor-
respondence mining process as a bipartite graph matching
problem with each modality as a graph and each cluster as a
node. The matching cost between nodes is positively corre-lated with the distance of clusters. By minimizing the global
matching cost, graph matching is expected to generate more
reliable correspondences with global consideration. Graph
matching has been demonstrated to have an advantage in
unsupervised correspondence localization between two fea-
ture sets [ 6,35,44,49,50]. With this property, we are in-
spired to construct a graph for each modality and link the
same person across different modalities.
Second, we propose a progressive strategy to tackle the
imbalance problem. Basic graph matching cannot han-dle the cluster imbalance issue across modalities, which is
caused by camera variations within a class. Instances of the
same person are sometimes split into different clusters [4,
57] and some clusters cannot find their cross-modality cor-
respondences (see Fig. 1c). This correspondence-missing
problem affects the further modality discrepancy decrease.
In response, we propose to find the correspondence for each
cluster through multiple dynamic matching (see Fig. 1c
right). The subgraphs in the bipartite graph are dynami-cally updated according to the previous matching results un-til each cluster progressively finds its correspondence. With
the progressive strategy, different clusters with the same
person ID could find the same cross-modality correspon-
dences. Therefore, these many-to-one matching results alle-viate the imbalance issue and also implicitly enhance intra-class compactness.
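One plausible reading of this procedure is sketched below with SciPy's Hungarian solver: each round matches the still-unmatched clusters of one modality against all clusters of the other, so every cluster eventually receives a (possibly many-to-one) correspondence. The specific cost (Euclidean distance between cluster centroids) and the update rule are our simplifications, not the exact PGM formulation.

import numpy as np
from scipy.optimize import linear_sum_assignment

def progressive_graph_matching(vis_centers, ir_centers):
    # Progressively match visible clusters to infrared clusters (many-to-one allowed).
    # vis_centers: (Nv, D) visible cluster centroids
    # ir_centers:  (Ni, D) infrared cluster centroids
    # Returns a dict {visible_cluster_id: infrared_cluster_id}.
    matches = {}
    remaining = list(range(len(vis_centers)))
    while remaining:
        sub = vis_centers[remaining]                               # unmatched visible clusters
        cost = np.linalg.norm(sub[:, None] - ir_centers[None], axis=2)
        rows, cols = linear_sum_assignment(cost)                   # globally minimal matching cost
        for r, c in zip(rows, cols):
            matches[remaining[r]] = int(c)
        remaining = [v for v in remaining if v not in matches]     # re-match leftovers next round
    return matches

vis = np.random.randn(7, 16)
ir = np.random.randn(4, 16)
print(progressive_graph_matching(vis, ir))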
In addition, to fully exploit the mined cross-modality
correspondences, we design a novel Alternate Cross Con-
trastive Learning (ACCL) module. Inspired by supervised
methods like [23, 25, 47], Cross Contrastive Learning (CCL)
reduces the modality discrepancy by pulling the instance
close to its corresponding cross-modality proxy and push-ing it away from other proxies. However, unlike the super-
vised setting, the cross-modality correspondences generatedby unsupervised methods are inevitably noisy, so directlycombining the two unidirectional metric losses ( visible to
infrared and infrared to visible ) may lead to rapid false “as-
sociation”. We propose to alternately use two unidirectional
metric losses so that positive cross-modality pairs can beassociated by stages. This alternate scheme mitigates theeffect of noise since the false positive pairs do not remain
for long. In this alternating way, the noise effect would be
reduced (as detailed in Sec. 3.3).
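A bare-bones sketch of the alternating scheme: even epochs use only the visible-to-infrared proxy loss and odd epochs only the reverse direction, each written here as a simple proxy-based cross-entropy; the temperature and the exact loss form are placeholder choices, not the paper's definition.

import torch
import torch.nn.functional as F

def proxy_contrastive_loss(feats, labels, proxies, temperature=0.1):
    # Pull each feature towards its assigned cross-modality proxy, push from the others.
    logits = feats @ proxies.t() / temperature          # (B, K) similarities to all proxies
    return F.cross_entropy(logits, labels)

def accl_step(epoch, vis_feats, vis_labels, ir_feats, ir_labels, vis_proxies, ir_proxies):
    # Alternate the two unidirectional losses across epochs.
    if epoch % 2 == 0:
        return proxy_contrastive_loss(vis_feats, vis_labels, ir_proxies)   # visible -> infrared
    return proxy_contrastive_loss(ir_feats, ir_labels, vis_proxies)        # infrared -> visible

vis_feats, ir_feats = torch.randn(8, 32), torch.randn(8, 32)
vis_labels = torch.randint(0, 5, (8,))      # indices of matched infrared proxies (e.g. from PGM)
ir_labels = torch.randint(0, 6, (8,))       # indices of matched visible proxies
loss = accl_step(epoch=3, vis_feats=vis_feats, vis_labels=vis_labels,
                 ir_feats=ir_feats, ir_labels=ir_labels,
                 vis_proxies=torch.randn(6, 32), ir_proxies=torch.randn(5, 32))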
Our main contributions can be summarized as follows:
• We propose the PGM method to mine reliable cross-
modality correspondences for unsupervised VI-ReID.
We first build modality graph and perform graphmatching to consider global information among iden-
tities and devise a progressive strategy to make thematching process applicable to imbalanced clusters.
• We design ACCL to decrease the modality dispar-
ity, which promotes the learning of modality-invariant
information by gathering the instance to its corre-
sponding cross-modality proxy. The alternate updat-ing scheme is designed to mitigate the effect of noisy
cross-modality correspondences.
• Extensive experiments demonstrate that PGM method
provides relatively reliable cross-modality correspon-dences and our proposed method achieves significantimprovement in unsupervised VI-ReID.
|
Wang_Progressive_Disentangled_Representation_Learning_for_Fine-Grained_Controllable_Talking_Head_Synthesis_CVPR_2023 | Abstract
We present a novel one-shot talking head synthesis
method that achieves disentangled and fine-grained control
over lip motion, eye gaze&blink, head pose, and emotional
expression. We represent different motions via disentangled
latent representations and leverage an image generator to
synthesize talking heads from them. To effectively disen-
tangle each motion factor, we propose a progressive disen-
tangled representation learning strategy by separating the
factors in a coarse-to-fine manner, where we first extract
unified motion feature from the driving signal, and then iso-
late each fine-grained motion from the unified feature. We
leverage motion-specific contrastive learning and regress-
ing for non-emotional motions, and introduce feature-level
decorrelation and self-reconstruction for emotional expres-sion, to fully utilize the inherent properties of each motion
factor in unstructured video data to achieve disentangle-
ment. Experiments show that our method provides high
quality speech&lip-motion synchronization along with pre-
cise and disentangled control over multiple extra facial mo-
tions, which can hardly be achieved by previous methods.
| 1. Introduction
Talking head synthesis is an indispensable task for cre-
ating realistic video avatars and enables multiple applica-
tions such as visual dubbing, interactive live streaming, and
online meeting. In recent years, researchers have made
great progress in one-shot generation of vivid talking heads
by leveraging deep learning techniques. Corresponding
methods can be mainly divided into audio-driven talking
head synthesis and video-driven face reenactment. Audio-
driven methods focus more on accurate lip motion syn-
thesis from audio signals [9, 44, 52, 54]. Video-driven ap-
proaches [50,59] aim to faithfully transfer all facial motions
in the source video to target identities and usually treat these
motions as a unity without individual control.
We argue that a fine-grained and disentangled control
over multiple facial motions is the key to achieving lifelike
talking heads, where we can separately control lip motion,
head pose, eye motion, and expression, given correspond-
ing respective driving signals. This is not only meaningful
from the research aspect which is often known as the dis-
entangled representation learning but also has a great im-
pact on practical applications. Imagining in a real scenario,
where we would like to modify the eye gaze of an already
synthesized talking head, it could be costly if we cannot
solely change it but instead ask an actor to perform a com-
pletely new driving motion. Nevertheless, controlling all
these factors in a disentangled manner is very challenging.
For example, lip motions are highly tangled with emotions
by nature, whereas the mouth movement of the same speech
can be different under different emotions. There are also
insufficient annotated data for large-scale supervised learn-
ing to disentangle all these factors. As a result, existing
methods either cannot modify certain factors such as eye
gaze or expression [75, 76], or can only change them alto-
gether [26,32], or have difficulties providing precise control
over individual factors [26].
In this paper, we propose Progressive Disentangled
Fine-Grained Controllable Talking Head (PD-FGC) for
one-shot talking head generation with disentangled con-
trol over lip motion, head pose, eye gaze&blink, and emo-
tional expression1, where the control signal of lip motion
comes from audios, and all other motions can be individu-
ally driven by different videos. To this end, our intuition is
to learn disentangled latent representation for each motion
factor, and leverage an image generator to synthesize talk-
ing heads taking these latent representations as input. How-
ever, it is very challenging to disentangle all these factors
given only in-the-wild video data for training. Therefore,
we propose to fully utilize the inherent properties of each
motion within the video data with little help of existing prior
models. We design a progressive disentangled representa-
tion learning strategy to separate each factor control in a
coarse-to-fine manner based on their individual properties.
It consists of three stages:
1)Appearance and Motion Disentanglement . We first
learn appearance and motion disentanglement via data aug-
mentation and self-driving [6,75] to obtain a unified motion
feature that records all motions of the driving frame mean-
while excludes appearance information. It serves as a strong
1We define the emotional expression as the facial expression that ex-
cludes speech-related mouth movement and eye gaze&blink.
starting point for further fine-grained disentanglement.
2)Fine-Grained Motion Disentanglement . Given the
unified motion feature, we learn individual motion repre-
sentation for lip motion, eye gaze&blink, and head pose,
via a carefully designed motion-specific contrastive learn-
ing scheme as well as the guidance of a 3D pose estima-
tor [14]. Intuitively, speech-only lip motion can be well
separated via learning shared information between the uni-
fied motion feature and the corresponding audio signal [75];
eye motions can be disentangled by region-level contrastive
learning that focuses on eye region only, and head pose can
be well defined by 3D rigid transformation.
3)Expression Disentanglement . Finally, we turn to the
challenging expression separation as the emotional expres-
sion is often highly tangled with other motions such as
mouth movement. We achieve expression disentanglement
via decorrelating it with other motion factors on a feature
level, which we find works incredibly well. An image gen-
erator is simultaneously learned for self-reconstruction of
the driving signals to learn the semantically-meaningful ex-
pression representation in a complementary manner.
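Feature-level decorrelation can be instantiated in several ways; the sketch below penalizes the batch cross-correlation between the expression feature and the other motion features, which is one simple reading of the idea rather than the exact loss used in this work.

import torch

def decorrelation_loss(expr_feat, other_feat, eps=1e-6):
    # Penalize cross-correlation between expression features and other motion features.
    # expr_feat:  (B, D1) emotional-expression representation
    # other_feat: (B, D2) concatenated lip/pose/eye representations
    e = (expr_feat - expr_feat.mean(0)) / (expr_feat.std(0) + eps)
    o = (other_feat - other_feat.mean(0)) / (other_feat.std(0) + eps)
    corr = e.t() @ o / e.shape[0]        # (D1, D2) batch cross-correlation matrix
    return (corr ** 2).mean()            # drive all cross-correlations towards zero

loss = decorrelation_loss(torch.randn(16, 64), torch.randn(16, 96))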
In summary, our contributions are as follows: 1) We pro-
pose a novel one-shot and fine-grained controllable talk-
ing head synthesis method that disentangles appearance,
lip motion, head pose, eye blink&gaze, and emotional ex-
pression, by leveraging a carefully designed progressive
disentangled representation learning strategy. 2) Motion-
specific contrastive learning and feature-level decorrelation
are leveraged to achieve desired factor disentanglement. 3)
Trained on unstructured video data with limited guidance
from prior models, our method can precisely control di-
verse facial motions given different driving signals, which
can hardly be achieved by previous methods.
|
Xue_GarmentTracking_Category-Level_Garment_Pose_Tracking_CVPR_2023 | Abstract
Garments are important to humans. A visual system
that can estimate and track the complete garment pose
can be useful for many downstream tasks and real-world
applications. In this work, we present a complete pack-
age to address the category-level garment pose tracking
task: (1) A recording system VR-Garment , with which
users can manipulate virtual garment models in simula-
tion through a VR interface. (2) A large-scale dataset VR-
Folding , with complex garment pose configurations in ma-
nipulation like flattening and folding. (3) An end-to-end
online tracking framework GarmentTracking , which pre-
dicts complete garment pose both in canonical space and
task space given a point cloud sequence. Extensive ex-
periments demonstrate that the proposed GarmentTrack-
ing achieves great performance even when the garment
has large non-rigid deformation. It outperforms the base-
line approach on both speed and accuracy. We hope our
proposed solution can serve as a platform for future re-
search. Codes and datasets are available in https://
garment-tracking.robotflow.ai .
| 1. Introduction
Garments are one of the most important deformable ob-
jects in daily life. A vision system for garment pose estima-
tion and tracking can benefit downstream tasks like MR/AR
and robotic manipulation [7, 17]. The category-level gar-
ment pose estimation task is firstly introduced in Garment-
Nets [11], which aims to recover the full configuration of an
unseen garment from a single static frame. Unlike the non-
rigid tracking methods [9, 10, 16, 19, 27, 34, 35] which can
only recover the geometry of the visible regions, pose es-
timation task can also reconstruct the occluded parts of the
object. Another line of works [14, 15, 21, 22, 28–30] (non-
rigid 4D reconstruction which can reconstruct complete ob-
†Cewu Lu is the corresponding author, the member of Qing Yuan
Research Institute and MoE Key Lab of Artificial Intelligence, AI Institute,
Shanghai Jiao Tong University, China and Shanghai Qi Zhi Institute.
ject geometry) cannot be directly applied on garments, since
they assume the object has a watertight geometry. In con-
trast, garments have thin structures with holes.
In this paper, we propose a new task called Category-
level Garment Pose Tracking , which extends the single-
frame pose estimation setting in [11] to pose tracking in
dynamic videos. Specifically, we focus on the pose track-
ing problem in garment manipulation ( e.g. flattening, fold-
ing). In this setting, we do not have the priors of the human
body like previous works for clothed humans [18,26,31,41].
Therefore, we must address the extreme deformation that
manipulated garments could undergo.
To tackle the garment pose tracking problem, we need
a dataset of garment manipulation with complete pose an-
notations. However, such a dataset does not exist so far to
the best of our knowledge. To build such a dataset, we turn
to a VR-based solution due to the tremendous difficulty of
garment pose annotation in the real world [10]. We first
create a real-time VR-based recording system named VR-
Garment . Then the volunteer can manipulate the garment
in a simulator through the VR interface. With VR-Garment,
we build a large-scale garment manipulation dataset called
VR-Folding . Compared to the single static garment config-
uration ( i.e. grasped by one point) in GarmentNets, our ma-
nipulation tasks include flattening and folding, which con-
tain much more complex garment configurations. In total,
our VR-Folding dataset contains 9767 manipulation videos
which consist of 790K multi-view RGB-D frames with full
garment pose and hand pose annotations on four garment
categories selected from the CLOTH3D [8] dataset.
With the VR-Folding dataset, we propose an end-to-end
online tracking method called GarmentTracking to per-
form category-level garment pose tracking during manipu-
lation. For the garment pose modeling, we follow Garment-
Nets [11] to adopt the normalized object coordinate space
(NOCS) for each category. Nevertheless, tracking garment
pose raises new challenges compared to single-frame pose
estimation: (1) How to fuse inter-frame geometry and corre-
spondence information? (2) How to make the tracking pre-
diction robust to pose estimation errors? (3) How to achieve
[Figure 1: recording steps ① Load current garment, ② Grab the garment, ③ Start recording, ④ Swing the garment, ⑤ Flatten the garment, ⑥ Stop recording; the re-rendered data include the RGB image, depth image, mask, and mesh with NOCS label.]
Figure 1. The pipeline of our Virtual Reality recording system (VR-Garment). (a) A volunteer needs to put on a VR headset and VR gloves. (b) By following the guidance of a specially designed UI, the volunteer begins to collect data efficiently. (c) After recording, we re-render multi-view RGB-D images with Unity [6] and obtain masks and deformed garment meshes with NOCS labels.
tracking in real-time? To address these challenges, we con-
duct GarmentTracking in three stages, namely NOCS pre-
dictor, NOCS refiner, and warp field mapper. Firstly, it pre-
dicts per-frame features and fuses them for canonical co-
ordinate prediction. Then it refines the predicted canonical
coordinates and the geometry with a NOCS refiner to re-
duce the accumulated errors. Finally, it maps the prediction
in canonical space to the task space ( i.e. coordinate frame
of the input point cloud).
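The three-stage flow can be summarized with the skeleton below; the toy modules and tensor shapes are our own, and only the stage names and the data flow follow the description above.

import torch
import torch.nn as nn

class ToyGarmentTracker(nn.Module):
    # Skeleton of the NOCS predictor -> NOCS refiner -> warp field mapper pipeline.
    def __init__(self, feat_dim=64):
        super().__init__()
        self.nocs_predictor = nn.Sequential(nn.Linear(3 + 3, feat_dim), nn.ReLU(),
                                            nn.Linear(feat_dim, 3))   # fuses current + previous frame
        self.nocs_refiner = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                          nn.Linear(feat_dim, 3))
        self.warp_field_mapper = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(),
                                               nn.Linear(feat_dim, 3))

    def forward(self, points_t, prev_nocs):
        # Stage 1: predict canonical (NOCS) coordinates, fusing inter-frame information.
        nocs = self.nocs_predictor(torch.cat([points_t, prev_nocs], dim=-1))
        # Stage 2: refine canonical coordinates to limit accumulated error.
        nocs = nocs + self.nocs_refiner(nocs)
        # Stage 3: map the canonical prediction back to the task space.
        task_space = self.warp_field_mapper(nocs)
        return nocs, task_space

tracker = ToyGarmentTracker()
nocs, warped = tracker(torch.rand(2048, 3), torch.rand(2048, 3))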
Since no previous work is designed for the tracking set-
ting, we use GarmentNets [11] as a single-frame prediction
baseline for comparison. We also perform extensive abla-
tive experiments to reveal the efficacy of our design choices.
Finally, we collect real-world data on garment manipula-
tion and show the qualitative results of our method. In our
design, we avoid the computationally expensive Marching
Cubes [25] for reconstructing the canonical mesh frame by
frame, so that we can achieve tracking at 15 FPS with an
RTX 3090 GPU ( 5 times faster than the baseline approach).
We summarize our contributions as follows:
1). We propose a VR-based garment manipulation
recording system named VR-Garment . It can synchronize
human operations into the simulator and collect garment
manipulation data.
2). We propose a large-scale garment manipulation
dataset named VR-Folding for pose tracking. During ma-
nipulation, garments exhibit diverse configurations.
3). We propose a real-time end-to-end framework named
GarmentTracking for category-level garment pose track-
ing. It can serve as a strong baseline for further research.
We also demonstrate its generalization ability to real-world
garment recordings with models trained by simulated data. |
Wang_Omni_Aggregation_Networks_for_Lightweight_Image_Super-Resolution_CVPR_2023 | Abstract
While lightweight ViT framework has made tremendous
progress in image super-resolution, its uni-dimensional
self-attention modeling, as well as homogeneous aggre-
gation scheme, limit its effective receptive field (ERF) to
include more comprehensive interactions from both spa-
tial and channel dimensions. To tackle these drawbacks,
this work proposes two enhanced components under a
new Omni-SR architecture. First, an Omni Self-Attention
(OSA) block is proposed based on dense interaction princi-
ple, which can simultaneously model pixel-interaction from
both spatial and channel dimensions, mining the potential
correlations across omni-axis (i.e., spatial and channel).
Coupling with mainstream window partitioning strategies,
OSA can achieve superior performance with compelling
computational budgets. Second, a multi-scale interaction
scheme is proposed to mitigate sub-optimal ERF (i.e., pre-
mature saturation) in shallow models, which facilitates lo-
cal propagation and meso-/global-scale interactions, ren-
dering an omni-scale aggregation building block. Extensive
experiments demonstrate that Omni-SR achieves record-
high performance on lightweight super-resolution bench-
marks (e.g., 26.95 dB @ Urban100 ×4 with only 792K pa-
rameters). Our code is available at https://github.
com/Francis0625/Omni-SR .
| 1. Introduction
Image super-resolution (SR) is a long-standing low-level
problem that aims to recover high-resolution (HR) images
from degraded low-resolution (LR) inputs. Recently, vi-
sion transformer [14, 51] based (i.e., ViT-based) SR frame-
works [5, 31] have emerged, showing significant perfor-
mance gains compared to previously dominant Convolu-
tional Neural Networks (CNNs) [66]. However, most at-
tempts [31] are devoted to improving the large-scale ViT-
based models, while the development of lightweight ViTs
(typically, less than 1M parameters) remains fraught with
*Equal Contribution.
†Corresponding author: Bingbing Ni.
[Figure 1: plots of PSNR vs. training iteration and PSNR vs. parameters (K) comparing Spatial SA, Channel SA, and Omni SA.]
Figure 1. Typical self-attention schemes [31,59] can only perform
uni-dimensional (e.g., spatial-only) interactions, and.
difficulties. This paper focuses on boosting the restoration
performance of lightweight ViT-based frameworks.
Two difficulties hinder the development of lightweight
ViT-based models: 1) Uni-dimensional aggregation oper-
ators (i.e., spatial [31] only or channel [59] only) impris-
ons the full potential of self-attention operators. Contem-
porary self-attention generally realizes the interaction be-
tween pixels by calculating the cross-covariance of the spa-
tial direction (i.e., width and height) and exchanges context
information in a channel-separated manner. This interac-
tion scheme ignores the explicit use of channel informa-
tion. However, recent evidences [59] and our practice show
that self-attention in the channel dimension (i.e., compu-
tationally more compact than spatial self-attention) is also
crucial in low-level tasks. 2) Homogeneous aggregation
schemes (i.e., Simple hierarchical stacking of single opera-
tors, e.g., convolution, self-attention) neglect abundant tex-
ture patterns of multi-scales, which is urgently needed in
SR task. Specifically, a single operator is only sensitive to
information of one scale [6,12], e.g., self-attention is sensi-
tive to long-term information and pays little attention to lo-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
22378
cal information. Additionally, stacking of homogeneous op-
erators proves to be inefficient and suffers from premature
saturation of the interaction range [8], which is reflected as
a suboptimal effective receptive field. The above problem
is exacerbated in lightweight models because lightweight
models cannot stack enough layers.
In order to solve the above problems and pursue higher
performance, this work proposes a novel omni-dimension
feature aggregation scheme called Omni Self-Attention
(OSA) exploiting both spatial and channel axis informa-
tion in a simultaneous manner (i.e., extends the interaction
into three-dimensional space), which offers higher-order
receptive field information, as shown in Figure 1. Un-
like scalar-based (a group of important coefficient) chan-
nel interaction [19], OSA enables comprehensive informa-
tion propagation and interaction by cascading computation
of the cross-covariance matrices between spatial/channel
dimensions. The proposed OSA module can be plugged
into any mainstream self-attention variants (e.g., Swin [35],
Halo [50]), which provides a finer granularity of important
encoding (compared to the vanilla channel attention [19]),
achieving a perceptible improvement in contextual aggre-
gation capabilities. Furthermore, a multi-scale hierarchical
aggregation block, named Omni-Scale Aggregation Group
(i.e., OSAG for short), is presented to achieve tailored en-
coding of varying scales of texture patterns. Specifically,
OSAG builds three cascaded aggregators: local convolution
(for local details), meso self-attention (focusing on mid-
scale pattern processing), and global self-attention (pursu-
ing global context understanding), rendering an omni-scale
(i.e., local-/meso-/global-scale simultaneously) feature ex-
traction capability. Compared to the homogenized feature
extraction schemes [28, 31], our OSAG is able to mine
richer information producing features with higher informa-
tion entropy. Coupling with the above two designs, we es-
tablish a new ViT-based framework for lightweight super-
resolution, called Omni-SR , which exhibits superior restora-
tion performance as well as covers a larger interaction range
while maintaining an attractive model size, i.e., 792K.
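To illustrate what interaction across both axes means in practice, the toy sketch below cascades a spatial self-attention step (affinities over pixel positions) with a channel self-attention step (cross-covariance over channels) on the flattened tokens of one window; this single-head version is a simplified assumption of ours, not the exact OSA block.

import torch

def spatial_then_channel_attention(x):
    # Toy cascade of spatial and channel self-attention on window tokens.
    # x: (N, C) tokens of one window (N pixel positions, C channels),
    #    reused here as queries, keys, and values for brevity.
    n, c = x.shape
    # Spatial self-attention: (N, N) affinity between pixel positions.
    attn_s = torch.softmax(x @ x.t() / c ** 0.5, dim=-1)
    x = attn_s @ x
    # Channel self-attention: (C, C) cross-covariance between channels.
    attn_c = torch.softmax(x.t() @ x / n ** 0.5, dim=-1)
    x = x @ attn_c.t()
    return x

out = spatial_then_channel_attention(torch.randn(64, 32))   # e.g. one 8x8 window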
We extensively experiment with the proposed frame-
work with both qualitative and quantitative evaluations on
mainstream open-source image super-resolution datasets.
It is demonstrated that our framework achieves state-of-
the-art performance at the lightweight model scale (e.g.,
Urban100 ×4: 26.95 dB, Manga109 ×4: 31.50 dB). More
importantly, compared to existing ViT-based super-resolution
frameworks, our framework shows superior optimization
properties (e.g., convergence speed, smoother loss land-
scape), which endow our model with better robustness.
|
Wu_Logical_Consistency_and_Greater_Descriptive_Power_for_Facial_Hair_Attribute_CVPR_2023 | Abstract
Face attribute research has so far used only simple bi-
nary attributes for facial hair; e.g., beard / no beard. We
have created a new, more descriptive facial hair annotation
scheme and applied it to create a new facial hair attribute
dataset, FH37K. Face attribute research also so far has not
dealt with logical consistency and completeness. For ex-
ample, in prior research, an image might be classified as
both having no beard and also having a goatee (a type of
beard). We show that the test accuracy of previous classifi-
cation methods on facial hair attribute classification drops
significantly if logical consistency of classifications is en-
forced. We propose a logically consistent prediction loss,
LCPLoss, to aid learning of logical consistency across at-
tributes, and also a label compensation training strategy
to eliminate the problem of no positive prediction across
a set of related attributes. Using an attribute classifier
trained on FH37K, we investigate how facial hair affects
face recognition accuracy, including variation across de-
mographics. Results show that similarity and difference in
facial hairstyle have important effects on the impostor and
genuine score distributions in face recognition. The code is
at https://github.com/HaiyuWu/LogicalConsistency.
| 1. Introduction
Facial attributes have been widely used in face match-
ing/recognition [8, 14, 30, 31, 37, 43], face image retrieval
[34, 39], re-identification [42, 44, 45], training GANs [15,
16, 25, 33] for generation of synthetic images, and other ar-
eas. Although facial hairstyle is an important feature of the
face, it has not attracted enough attention as a research area. One
reason is that current datasets have only simple binary at-
tributes to describe facial hair, and this does not support
deeper investigation. This paper introduces a more descrip-
tive set of facial hair attributes, representing dimensions of
the area of face covered, the length of the hair, and con-
nectedness of beard/mustache/sideburns. We also propose
a logically consistent prediction loss function, LCPLoss,
and a label compensation strategy to enhance the logical con-
Figure 1. (1) What is the best way to define the facial hair styles?
(2) How does the facial hair classifier perform in real-world
cases? (3) How does the face matcher treat the same (different)
person with different (same) beard styles? This paper presents our
approaches and answers for these questions.
sistency of the predictions. We illustrate the use of this new,
richer set of facial hair annotations by investigating the ef-
fect of beard area on face recognition accuracy across de-
mographic groups. Contributions of this work include:
• A richer scheme of facial hair attributes is defined and
annotations are created for the FH37K dataset. The at-
tributes describe facial hair features along dimensions
of area of the face covered, length of hair and connect-
edness of elements of the hair (See Sec. 2 and 4.1).
• The logical consistency of the predictions of the facial
hair attribute classifier is analyzed. We show that the
proposed LCPLoss and label compensation strategy
can significantly reduce the number of logically incon-
sistent predictions (See Section 5 and Section 6.1; a
simplified sketch of the consistency idea follows this list).
• We analyze the effect of the beard area on face recogni-
tion accuracy. A larger difference in beard area between
a pair of images matched for recognition decreases the
similarity value of both impostor and genuine image
pairs. Interestingly, the face matchers perform dif-
ferently across demographic groups when image pairs
have the same beard area. (See Section 6.2)
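As a concrete illustration of the logical-consistency idea referenced in the second contribution, the sketch below penalizes predictions in which two mutually exclusive attributes, e.g., "no beard" and "goatee" as in the example from the abstract, are both predicted positive. The product form of the penalty and the attribute indices are assumptions for illustration only; the paper's actual LCPLoss and label compensation strategy may be formulated differently.

import torch

def consistency_penalty(probs, exclusive_pairs):
    """probs: (B, A) sigmoid outputs over A binary attributes.
    exclusive_pairs: list of (i, j) attribute indices that must not both be positive.
    The product p_i * p_j is large only when both are predicted positive, so it acts as a penalty."""
    penalty = probs.new_zeros(())
    for i, j in exclusive_pairs:
        penalty = penalty + (probs[:, i] * probs[:, j]).mean()
    return penalty

# toy usage: attribute 0 stands for "no beard", attribute 3 for "goatee"
probs = torch.sigmoid(torch.randn(8, 10))
print(consistency_penalty(probs, exclusive_pairs=[(0, 3)]).item())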
2. Facial Hair In Face Attribute Datasets
For a broad discussion of face attribute classification re-
search, see the recent survey by Zheng et al [55]. Here, we
Dataset                          # of images  # of ids  # of facial hair attributes  Area  Length  CNDN  Ec
Berkeley Human Attributes [10]⋆  8,053        -         0                            0     0       0     ✗
Attributes 25K [54]              24,963       24,963    0                            0     0       0     ✗
FaceTracer [29]⋆                 15,000       15,000    1 (Mustache)                 0     0       0     ✗
Ego-Humans [49]                  2,714        -         1 (Facial hair)              0     0       0     ✗
CelebA [36]⋆                     202,599
|
Wang_Context-Aware_Pretraining_for_Efficient_Blind_Image_Decomposition_CVPR_2023 | Abstract
In this paper, we study Blind Image Decomposition
(BID), which is to uniformly remove multiple types of degra-
dation at once without foreknowing the noise type. There
remain two practical challenges: (1) Existing methods typ-
ically require massive data supervision, making them in-
feasible for real-world scenarios. (2) The conventional
paradigm usually focuses on mining the abnormal pat-
tern of a superimposed image to separate the noise, which
de facto conflicts with the primary image restoration task.
Therefore, such a pipeline compromises repairing efficiency
and authenticity. In an attempt to solve the two chal-
lenges in one go, we propose an efficient and simplified
paradigm, called Context-aware Pretraining ( CP), with two
pretext tasks: mixed image separation and masked im-
age reconstruction. Such a paradigm reduces the annota-
tion demands and explicitly facilitates context-aware fea-
ture learning. Assuming the restoration process follows a
structure-to-texture manner, we also introduce a Context-
aware Pretrained network ( CPNet ). In particular, CPNet
contains two transformer-based parallel encoders, one in-
formation fusion module, and one multi-head prediction
module. The information fusion module explicitly utilizes
the mutual correlation in the spatial-channel dimension,
while the multi-head prediction module facilitates texture-
guided appearance flow. Moreover, a new sampling loss
along with an attribute label constraint is also deployed to
make use of the spatial context, leading to high-fidelity im-
age restoration. Extensive experiments on both real and
synthetic benchmarks show that our method achieves com-
petitive performance for various BID tasks.
| 1. Introduction
Different from traditional image restoration, Blind Im-
age Decomposition (BID) aims to remove arbitrary degra-
∗Work done during an internship at Baidu.
†Corresponding author: Yi Yang.
Figure 1. The prototype of restoration frameworks: (a) Traditional
methods [61,62] require task-specific network design and separate
training. (b) All-in-one [28] relies on tedious one-by-one training
of multiple heads. (c) TransWeather [49] is ad-hoc to remove one
specific noise at a time. (d) IPT [4] extends (c) with a reusable
pretrained middle-level transformer, which only works on specific
tasks. (e) BIDeN [17] returns to the complex multiple decoders
and demands dense supervision from noise labels. (f) The pro-
posed method studies removing the general noise combinations by
harnessing the prior knowledge learned during pretraining, which
largely simplifies the pipeline. (Please zoom in to see the details.)
dation combinations without knowing noise type and mix-
ing mechanism. This task is challenging due to the huge
gap between different noises and varying mixing patterns
as amalgamated noises increase. Although many exist-
ing methods [30, 54, 61, 62] have been proposed as generic
restoration networks, they are still fine-tuned on the indi-
vidual datasets and do not use a single general model for all
the noise removal tasks (Figure 1 (a)). All-in-one [28] fur-
ther proposes a unified model across 3 datasets, but it still
uses computationally complex separate encoders (Figure 1
(b)). To ameliorate this issue, TransWeather [49] introduces
a single encoder-single decoder transformer network for
multi-type adverse weather removal, yet this method is de-
signed to restore one specific degradation at one time (Fig-
ure 1 (c)), which does not conform to the BID setting.
Figure 2. Comparison between the proposed method and existing
approaches in terms of Peak Signal-to-Noise Ratio (PSNR) per-
formance and the parameter numbers. We can observe that a sin-
gle model instance of our method significantly outperforms both
single-task and multi-task networks with much fewer parameters.
Recently, Han et al. [17] propose a blind image decom-
position network (BIDeN) that first explores a feasible solu-
tion for BID task. This method intuitively considers a cor-
rupted image to be composed of a series of superimposed
images, and proposes a CNN-based network to separate
these components. In particular, BIDeN designs a multi-
scale encoder combined with separate decoders. However,
this network still requires tedious training as there are multi-
ple decoders for each component including the noise masks,
which compromises the primary restoration task (Figure 1
(e)). Moreover, this deeply-learned model is data-hungry,
considering the task-specific data can be limited under cer-
tain circumstances ( e.g., medical and satellite images). Be-
sides, various inconsistent factors ( e.g., camera parameters,
illumination, and weather) can further perturb the distribu-
tion of the captured data for training. In an attempt to ad-
dress the data limitation, IPT [4] first explores the pretrain-
ing scheme on several image processing tasks. However,
since the middle transformer does not learn the shared rep-
resentative features, IPT is still limited to task-specific fine-
tuning with complex multiple heads/tails and fails on the BID
task (Figure 1 (d)).
This paper aims at addressing the aforementioned chal-
lenges as a step toward an efficient and robust real-world
restoration framework. Inspired by successes in Masked
AutoEncoders (MAE), where the pretraining model on Im-
ageNet can efficiently adapt to the high-level represen-
tative vision benchmarks such as recognition and detec-
tion [18, 57], we argue that pretraining is still a potential
solution for the BID task. We also note that MAE-style pre-
training in low-level vision tasks is still under-explored. To fill
this gap, we resort to model pretraining via self-supervised
learning to acquire sufficient representative priors for BID.
In this paper, we propose a new Context-aware Pretrain-
ing ( CP), containing separation and reconstruction for cor-
rupted images. As shown in Figure 3, the pretext task is designed as a dual-branch pattern combining mixed image
separation and masked image reconstruction. Our intuition
underlying the proposed task is encouraging the network to
mine context information ( i.e., noise boundaries and types,
local and non-local semantics), and such knowledge can
be easily transferred to various restoration scenarios. We
also develop a pretrained model for BID using the trans-
former architecture, namely, Context-aware Pretrained Net-
work ( CPNet ). In contrast to previous methods, the pro-
posed CPNet can (1) remove arbitrary types or combina-
tions of noises at once, (2) avoid multi-head mask super-
vision of each source component (Figure 1 (f)), and (3) be
efficiently employed for high-fidelity restoration after fine-
tuning, as shown in Figure 2. To our knowledge, this work
provides the first framework to apply a self-supervised pre-
training learning strategy for BID task. Meanwhile, these
two branches are hierarchically connected with an informa-
tion fusion module that explicitly facilitates the feature in-
teraction through multi-scale self-attention. Furthermore,
we empirically subdue the learning difficulty by dividing
the restoration process into structure reconstruction dur-
ing pretraining and texture refinement during fine-tuning.
Instead of simply learning the mixed pattern and propor-
tionally scaling the pixel values as in previous methods, our
method intuitively gives the model more “imagination” and
thus leads to a more compelling and robust performance un-
der complex scenes. Moreover, a novel flow sampling loss
combined with a conditional attribute loss is further intro-
duced for precise and faithful blind image decomposition.
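A minimal sketch of how the two pretext tasks could be combined into a single pretraining objective is given below. The model interface with a task argument, the way degradations are mixed, the 16×16 patch masking, and the equal loss weighting are assumptions for illustration, not the actual CPNet training code.

import torch
import torch.nn.functional as F

def cp_pretrain_step(model, clean, degradations, mask_ratio=0.5):
    """clean: (B, C, H, W) clean images; degradations: callables that add noise layers.
    Branch 1 separates a mixed image back into the clean image; branch 2 reconstructs a masked image."""
    # pretext task 1: mixed image separation
    mixed = clean.clone()
    for degrade in degradations:
        mixed = degrade(mixed)
    loss_sep = F.l1_loss(model(mixed, task='separate'), clean)

    # pretext task 2: masked image reconstruction with a coarse 16x16-patch mask
    b, _, h, w = clean.shape
    keep = (torch.rand(b, 1, h // 16, w // 16, device=clean.device) > mask_ratio).float()
    keep = F.interpolate(keep, size=(h, w), mode='nearest')
    loss_rec = F.l1_loss(model(clean * keep, task='reconstruct'), clean)

    return loss_sep + loss_rec

# toy usage with a placeholder model and one synthetic degradation
dummy_model = lambda x, task: x
rain = lambda x: x + 0.1 * torch.rand_like(x)
print(cp_pretrain_step(dummy_model, torch.rand(2, 3, 64, 64), [rain]).item())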
Overall, our contributions are summarized as follows:
• Different from existing BID works, we introduce a new
self-supervised learning paradigm, called Context-
aware Pretraining ( CP) with two pretext tasks: mixed
image separation and masked image reconstruction.
To facilitate the feature learning, we also propose
Context-aware Pretrained Network ( CPNet ), which
benefits from the proposed information fusion mod-
ule and multi-head prediction module for texture-
guided appearance flow and conditional attribute label.
• Extensive experiments on BID benchmarks substanti-
ate that our method achieves competitive performance
for blind image restoration. More importantly, our
method consistently outperforms competitors by large
margins in terms of efficiency, e.g., 3.4× fewer FLOPs
and 50× faster inference time over BIDeN [17].
|
Watson_Virtual_Occlusions_Through_Implicit_Depth_CVPR_2023 | Abstract
For augmented reality (AR), it is important that virtual
assets appear to ‘sit among’ real world objects. The virtual
element should variously occlude and be occluded by real
matter, based on a plausible depth ordering. This occlu-
sion should be consistent over time as the viewer’s camera
moves. Unfortunately, small mistakes in the estimated scene
depth can ruin the downstream occlusion mask, and thereby
the AR illusion. Especially in real-time settings, depths in-
ferred near boundaries or across time can be inconsistent.
In this paper, we challenge the need for depth-regression as
an intermediate step.
We instead propose an implicit model for depth and use
that to predict the occlusion mask directly. The inputs to
our network are one or more color images, plus the known
depths of any virtual geometry. We show how our occlusion
predictions are more accurate and more temporally stable
than predictions derived from traditional depth-estimation
models. We obtain state-of-the-art occlusion results on the
challenging ScanNetv2 dataset and superior qualitative re-
sults on real scenes.
| 1. Introduction
Augmented reality and digital image editing usually en-
tail compositing virtual rendered objects to look as if they
are present in a real-world scene. A key and elusive
part of making this effect realistic is occlusion . Looking
from a camera’s perspective, a virtual object should ap-
pear partially hidden when part of it passes behind a real
world object. In practice this comes down to estimating,
for each pixel, if the final rendering pipeline should dis-
play the real world object there vs. showing the virtual ob-
ject [53, 24, 36].
Typically, this per-pixel decision is approached by first
estimating the depth of each pixel in the real world image
[19, 90, 36]. Obtaining the depth to each pixel on the vir-
tual object is trivial, and can be computed via traditional
graphics pipelines [33]. The final mask can be estimated by
Figure 1. We address the problem of automatically estimating oc-
clusion masks to realistically place virtual objects in real scenes.
Our approach, where we directly predict masks, leads to more ac-
curate compositing compared with Lidar-based sensors or tradi-
tional state-of-the-art depth regression methods.
comparing the two depth maps: where the real world depth
value is smaller, the virtual content is occluded, i.e. masked.
We propose an alternative, novel approach. Given im-
ages of the real world scene and the depth map of the virtual
assets, our network directly estimates the mask for com-
positing. The key advantage is that the network no longer
has to estimate the real-valued depth for every pixel, and
instead focuses on the binary decision: is the virtual pixel
in front or behind the real scene here? Further, at infer-
ence time we can use the soft output of a sigmoid layer to
softly blend between the real and virtual, which can give
visually pleasing compositions [43], compared with those
created by hard thresholding of depth maps (see Figure 1).
Finally, temporal consistency can be improved using ideas
from temporal semantic segmentation that were difficult to
apply when regressing depth as an intermediate step.
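The compositing step itself is easy to write down; the sketch below contrasts the conventional hard depth-comparison mask with the soft mask that a model of this kind predicts directly. The blending formula is the standard alpha composite, and the mask logits here simply stand in for the output of a trained network.

import torch

def composite_depth_test(real_rgb, real_depth, virtual_rgb, virtual_depth):
    """Conventional route: hard per-pixel test 'is the real surface closer than the virtual one?'."""
    occluded = (real_depth < virtual_depth).float().unsqueeze(1)   # 1 where the real scene hides the virtual object
    return occluded * real_rgb + (1.0 - occluded) * virtual_rgb

def composite_soft_mask(real_rgb, virtual_rgb, mask_logits):
    """Implicit route: the network outputs occlusion logits; a sigmoid gives a soft, visually pleasing blend."""
    m = torch.sigmoid(mask_logits).unsqueeze(1)                    # soft probability that the real scene is in front
    return m * real_rgb + (1.0 - m) * virtual_rgb

real_rgb, virtual_rgb = torch.rand(1, 3, 120, 160), torch.rand(1, 3, 120, 160)
real_depth = torch.rand(1, 120, 160) * 5.0
virtual_depth = torch.full((1, 120, 160), 2.0)
mask_logits = torch.randn(1, 120, 160)                             # placeholder for the trained model's output
print(composite_depth_test(real_rgb, real_depth, virtual_rgb, virtual_depth).shape)
print(composite_soft_mask(real_rgb, virtual_rgb, mask_logits).shape)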
We have three contributions:
1. We frame the problem of compositing a virtual object
at a known depth into a real scene as a binary mask
estimation problem. This is in contrast to previous ap-
proaches that estimate depth to solve this problem.
2. We introduce metrics for occlusion evaluation, and
find our method results in more accurate and visually
pleasing composites compared with alternatives.
3. We show that competing approaches flicker, leading to
jarring occlusions. By framing the problem as segmen-
tation, we can use temporal smoothing methods, which
result in smoother predictions compared to baselines.
Our ‘implicit depth’ model ultimately results in
state-of-the-art occlusions on the challenging ScanNetv2
dataset [17]. Further, if depths are needed too, we can com-
pute dense depth by gathering multiple binary masks. Sur-
prisingly, this results in state-of-the-art depth estimation.
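One way such dense depth could be gathered from binary front/behind decisions is to query the model at a sweep of candidate depth planes and, per pixel, take the nearest plane at which the prediction flips. The query interface and the plane-sweep aggregation below are assumptions for illustration; the paper may combine the masks differently.

import torch

def depth_from_binary_masks(query_fn, depth_planes, hw):
    """query_fn(d) -> (H, W) probability that the real surface lies in front of a plane at depth d.
    Returns a per-pixel depth estimate: the nearest plane where that probability first exceeds 0.5."""
    h, w = hw
    depth = torch.full((h, w), float(depth_planes[-1]))
    assigned = torch.zeros(h, w, dtype=torch.bool)
    for d in depth_planes:                                  # sweep from near to far
        newly = (query_fn(d) > 0.5) & ~assigned
        depth[newly] = float(d)
        assigned |= newly
    return depth

# toy stand-in for the trained network: a synthetic scene whose surface sits at 1.5 m everywhere
fake_query = lambda d: torch.full((120, 160), 1.0 if d >= 1.5 else 0.0)
print(depth_from_binary_masks(fake_query, [0.5, 1.0, 1.5, 2.0, 3.0, 5.0], (120, 160)).unique())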
|
Wu_Multiview_Compressive_Coding_for_3D_Reconstruction_CVPR_2023 | Abstract
A central goal of visual recognition is to understand ob-
jects and scenes from a single image. 2D recognition has
witnessed tremendous progress thanks to large-scale learn-
ing and general-purpose representations. Comparatively,
3D poses new challenges stemming from occlusions not de-
picted in the image. Prior works try to overcome these by
inferring from multiple views or rely on scarce CAD models
and category-specific priors which hinder scaling to novel
settings. In this work, we explore single-view 3D recon-
struction by learning generalizable representations inspired
by advances in self-supervised learning. We introduce a
simple framework that operates on 3D points of single ob-
jects or whole scenes coupled with category-agnostic large-
scale training from diverse RGB-D videos. Our model, Mul-
tiview Compressive Coding (MCC), learns to compress the
input appearance and geometry to predict the 3D structure
by querying a 3D-aware decoder. MCC’s generality and ef-
ficiency allow it to learn from large-scale and diverse data
sources with strong generalization to novel objects imag-
ined by DALL ·E 2 or captured in-the-wild with an iPhone.
| 1. Introduction
Images depict objects and scenes in diverse settings.
Popular 2D visual tasks, such as object classification [7] and
segmentation [32, 83], aim to recognize them on the image
plane. But image planes do not capture scenes in their en-
tirety. Consider Fig. 1a. The toy’s left arm is not visible in
the image. This is framed by the task of 3D reconstruction:
given an image, fully reconstruct the scene in 3D.
3D reconstruction is a longstanding problem in AI with
applications in robotics and AR/VR. Structure from Mo-
tion [18, 60] lifts images to 3D by triangulation. Recently,
NeRF [37] optimizes radiance fields to synthesize novel
views. These approaches require many views of the same
scene during inference and do not generalize to novel scenes
from a single image. Others [16, 68] predict 3D from a sin-
gle image but rely on expensive CAD supervision [5, 59].
Project page: https://mcc3d.github.io
[Figure 1 panels: (a) MCC Overview: an RGB-D image is unprojected and encoded, and an attention-based decoder maps queries and encodings to the 3D reconstruction; (b) 3D Reconstructions by MCC: input/output examples.]
Figure 1. Multiview Compressive Coding (MCC). (a): MCC
encodes an input RGB-D image and uses an attention-based model
to predict the occupancy and color of query points to form the final
3D reconstruction. (b): MCC generalizes to novel objects captured
with iPhones (left) or imagined by DALL ·E 2 [47] (middle). It is
also general – it works not only on objects but also scenes (right).
Reminiscent of generalized cylinders [40], some intro-
duce object-specific priors via category-specific 3D tem-
plates [25,29,31], pose [42] or symmetries [73]. While im-
pressive, these methods cannot scale as they rely on onerous
3D annotations and category-specific priors which are not
generally true. Alas large-scale learning, which has shown
promising generalization results for images [45] and lan-
guage [2], is largely underexplored for 3D reconstruction.
Image-based recognition is entering a new era thanks to
domain-agnostic architectures, like transformers [10, 65],
and large-scale category-agnostic learning [19]. Motivated
by these advances, we present a scalable, general-purpose
model for 3D reconstruction from a single image. We in-
troduce a simple, yet effective, framework that operates di-
rectly on 3D points. 3D points are general as they can cap-
ture any objects or scenes and are more versatile and ef-
ficient than meshes and voxels. Their generality and ef-
ficiency enables large-scale category-agnostic training. In
turn, large-scale training makes our 3D model effective.
Central to our approach is an input encoding and a que-
riable 3D-aware decoder. The input to our model is a single
RGB-D image, which returns the visible ( seen) 3D points
via unprojection. Image and points are encoded with trans-
formers. A new 3D point, sampled from 3D space, queries
a transformer decoder conditioned on the input to predict its
occupancy and its color. The decoder reconstructs the full,
seen andunseen , 3D geometry, as shown in Fig. 1a. Our
occupancy-based formulation, introduced in [36], frames
3D reconstruction as a binary classification problem and re-
moves constraints pertinent to specialized representations
(e.g., deformations of a 3D template) or a fixed resolution.
Being tasked with predicting the unseen 3D geometry of
diverse objects or scenes, our decoder learns a strong 3D
representation. This finding directly connects to recent ad-
vances in image-based self-supervised learning and masked
autoencoders (MAE) [19] which learn powerful image rep-
resentations by predicting masked (unseen) image patches.
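To illustrate the query-based decoding described above, here is a minimal sketch in which the seen, unprojected points are encoded once and arbitrary 3D query points are decoded into an occupancy logit and a color. The tokenization, network sizes, and attention layout are assumptions; this is not the released MCC architecture.

import torch
import torch.nn as nn

class QueryDecoderSketch(nn.Module):
    """Encode seen points (xyz + rgb); let 3D query points attend to them to predict occupancy and color."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.point_embed = nn.Linear(3 + 3, dim)      # seen, unprojected points with color
        self.query_embed = nn.Linear(3, dim)          # xyz of query points sampled in the volume
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(dim, heads, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, 1 + 3)             # occupancy logit + color

    def forward(self, seen_xyz_rgb, query_xyz):
        memory = self.point_embed(seen_xyz_rgb)       # (B, N_seen, dim)
        queries = self.query_embed(query_xyz)         # (B, N_query, dim)
        out = self.head(self.decoder(queries, memory))
        return out[..., :1], torch.sigmoid(out[..., 1:])   # binary-classification logits, colors in [0, 1]

seen = torch.rand(1, 2048, 6)                         # visible geometry from the RGB-D unprojection
query = torch.rand(1, 4096, 3)                        # points sampled in 3D space
occ, rgb = QueryDecoderSketch()(seen, query)
print(occ.shape, rgb.shape)                           # torch.Size([1, 4096, 1]) torch.Size([1, 4096, 3])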
Our model inputs single RGB-D images, which are ubiq-
uitous thanks to advances in hardware. Nowadays, depth
sensors are found in iPhone’s front and back cameras. We
show results from iPhone captures in §4 and Fig. 1b. Our
decoder predicts point cloud occupancies. Supervision is
sourced from multiple RGB-D views, e.g., video frames,
with relative camera poses, e.g., from COLMAP [54, 55].
The posed views produce 3D point clouds which serve as
proxy ground truth. These point clouds are far from “per-
fect” as they are susceptible to sensor and camera pose noise.
However, we show that when used at scale they are suf-
ficient for our model. This suggests that 3D annotations,
which are expensive to acquire, can be replaced with many
RGB-D video captures, which are much easier to collect.
We call our approach Multiview Compressive Coding
(MCC), as it learns from many views, compresses appear-
ance and geometry and learns a 3D-aware decoder. We
demonstrate the generality of MCC by experimenting on six
diverse data sources: CO3D [50], Hypersim [51], Taskon-
omy [81], ImageNet [7], in-the-wild iPhone captures and
DALL·E 2 [47] generations. These datasets range from
large-scale captures of more than 50 common object types,
to holistic scenes, such as warehouses, auditoriums, lofts,
restaurants, and imaginary objects. We compare to state-
of-the-art methods, tailored for single objects [20, 50, 79]
and scene reconstruction [30] and show our model’s supe-
riority in both settings with a unified architecture. Enabled
by MCC’s general purpose design, we show the impact of
large-scale learning in terms of reconstruction quality and
zero-shot generalization on novel object and scene types.
|
Wang_Improving_Generalization_of_Meta-Learning_With_Inverted_Regularization_at_Inner-Level_CVPR_2023 | Abstract
Despite the broad interest in meta-learning, the gener-
alization problem remains one of the significant challenges
in this field. Existing works focus on meta-generalization
to unseen tasks at the meta-level by regularizing the meta-
loss, while ignoring that adapted models may not general-
ize to the task domains at the adaptation level. In this pa-
per, we propose a new regularization mechanism for meta-
learning – Minimax-Meta Regularization, which employs
inverted regularization at the inner loop and ordinary reg-
ularization at the outer loop during training. In particular,
the inner inverted regularization makes the adapted model
more difficult to generalize to task domains; thus, optimiz-
ing the outer-loop loss forces the meta-model to learn meta-
knowledge with better generalization. Theoretically, we
prove that inverted regularization improves the meta-testing
performance by reducing generalization errors. We conduct
extensive experiments on the representative scenarios, and
the results show that our method consistently improves the
performance of meta-learning algorithms.
| 1. Introduction
Meta-learning has been proven to be a powerful
paradigm for extracting well-generalized knowledge from
previous tasks and quickly learning new tasks [47]. It has
received increasing attention in many machine learning set-
tings such as few-shot learning [10, 45, 46, 50] and robust
learning [27,39,42], and can be deployed in many practical
applications [7,21,29,54]. The key idea of meta-learning is
to improve the learning ability of agents through a learning-
to-learn process. In recent years, optimization-based al-
gorithms have emerged as a popular approach for realiz-
ing the learning-to-learn process in meta-learning [10, 28].
These methods formulate the problem as a bi-level opti-
mization problem and have demonstrated impressive per-
formance across various domains, leading to significant at-
tention from the research community. The primary focus of
our paper is to further advance this line of research.
The training process of meta-learning takes place at two
levels [10, 19]. At the inner-level, a base model, which
is initialized using the meta-model’s parameters, adapts to
each task by taking gradient descent steps over the support
set. At the outer-level, a meta-training objective is opti-
mized to evaluate the generalization capability of the initial-
ization on all meta-training tasks over the query set, help-
ing to ensure that the model is effectively optimized for the
desired goal. With this learning-to-learn process, the final
trained meta-model could be regarded as the model with
good initialization to adapt to new tasks.
Despite the success of meta-learning, the additional level
of learning also introduces a new source of potential over-
fitting [36], which poses a significant challenge to the gen-
eralization of the learned initialization. This generalization
challenge is twofold: first, the meta-model must general-
ize to unseen tasks ( meta-generalization ); and second, the
adapted model must generalize to the domain of a specific
task, which we refer to as adaptation-generalization . As
the primary objective of meta-learning is to achieve strong
performance when adapting to new tasks, the ability of the
meta-model to generalize well is critical. Recent works
aim to address the meta-generalization problem by meta-
regularizations, such as constraining the meta-initialization
space [52], enforcing the performance similarity of the
meta-model on different tasks [20], and augmenting meta-
training data [33, 36, 51]. These approaches are verified to
enhance generalization to unseen tasks. However, they do
not address the problem of adaptation-generalization to the
data distribution of meta-testing tasks.
To address this issue, we propose Minimax-Meta Regu-
larization, a novel regularization mechanism that improves
both adaptation-generalization and meta-generalization.
Specifically, our approach particularly employs inverted
regularization at the inner-level to hinder the adapted
model’s generalizability to the task domain. This forces the
meta-model to learn hypotheses that better generalize to the
task domains, which improves adaptation-generalization.
Meanwhile, we use ordinary regularization at the outer-
level to optimize the meta-model’s generalization to new
tasks, which helps meta-generalization. By improving both
adaptation-generalization and meta-generalization simulta-
neously, our method results in a more robust and effective
meta-learning regularization mechanism.
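The mechanism can be summarized in a few lines of MAML-style code. The sketch below uses a linear regressor, a single inner gradient step, and L2 penalties; these are simplifying assumptions for illustration rather than the paper's exact training recipe.

import torch

def minimax_meta_step(w, support, query, inner_lr=0.01, lam_in=0.1, lam_out=0.01):
    """One MAML-style task update with Minimax-Meta Regularization for a linear model f(x) = x @ w:
    the inner (adaptation) loss carries an inverted (negative) L2 penalty,
    while the outer (meta) loss carries an ordinary (positive) L2 penalty."""
    xs, ys = support
    xq, yq = query
    # inner loop: one adaptation step on the support set with inverted regularization
    inner = ((xs @ w - ys) ** 2).mean() - lam_in * (w ** 2).sum()
    grad = torch.autograd.grad(inner, w, create_graph=True)[0]
    w_adapted = w - inner_lr * grad
    # outer loop: evaluate the adapted parameters on the query set with ordinary regularization
    outer = ((xq @ w_adapted - yq) ** 2).mean() + lam_out * (w_adapted ** 2).sum()
    return outer

w = torch.zeros(5, 1, requires_grad=True)             # meta-initialization
make_task = lambda: (torch.randn(10, 5), torch.randn(10, 1))
minimax_meta_step(w, make_task(), make_task()).backward()
print(w.grad.shape)                                   # gradient w.r.t. the meta-initialization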
Theoretically, we prove that under certain assumptions,
if we add L2-Norm as the regularization term to the inner-
level loss function, the inverted regularization will reduce
the generalization bound of MAML, while the ordinary reg-
ularization will increase the generalization bound. In terms
of total test error, which includes both generalization error
and training bias caused by regularization, the inverted L2-
Norm also reduces the total test error when the regularization parame-
ter is selected within a negative interval. These results sug-
gest that the regularization at the inner-level should be in-
verted. As it has been verified that ordinary regularization at
the outer-level helps the meta-generalization, our theory im-
plies that the proposed Minimax-Meta Regularization helps
both meta-generalization and adaptation-generalization.
We conduct experiments on the few-shot classification
problem for MAML [10] with different regularization types
(ordinary/inverted) at the inner- and outer-level. The results
demonstrate the efficacy of Minimax-Meta Regularization,
and support the theoretical results that regularization at the
inner-level improves test performance only when it’s in-
verted. Additionally, we empirically verify that Minimax-
Meta regularization can be applied with different types of
regularization terms (norm/entropy), implying the flexibil-
ity for applying the proposed method in practice.
|
Wallace_EDICT_Exact_Diffusion_Inversion_via_Coupled_Transformations_CVPR_2023 | Abstract
Finding an initial noise vector that produces an input
image when fed into the diffusion process (known as inver-
sion) is an important problem in denoising diffusion models
(DDMs), with applications for real image editing. The stan-
dard approach for real image editing with inversion uses
denoising diffusion implicit models (DDIMs [29]) to deter-
ministically noise the image to the intermediate state along
the path that the denoising would follow given the original
conditioning. However, DDIM inversion for real images
is unstable as it relies on local linearization assumptions,
which result in the propagation of errors, leading to incor-
rect image reconstruction and loss of content. To alleviate
these problems, we propose Exact Diffusion Inversion via
Coupled Transformations (EDICT), an inversion method
that draws inspiration from affine coupling layers. EDICT
enables mathematically exact inversion of real and model-
generated images by maintaining two coupled noise vectors
which are used to invert each other in an alternating fash-
ion. Using Stable Diffusion [25], a state-of-the-art latent
diffusion model, we demonstrate that EDICT successfully
reconstructs real images with high fidelity. On complex im-
age datasets like MS-COCO, EDICT reconstruction signif-
icantly outperforms DDIM, improving the mean square er-
ror of reconstruction by a factor of two. Using noise vectors
inverted from real images, EDICT enables a wide range of
image edits—from local and global semantic edits to image
stylization—while maintaining fidelity to the original image
structure. EDICT requires no model training/finetuning,
prompt tuning, or extra data and can be combined with any
pretrained DDM.
| 1. Introduction
Using the iterative denoising diffusion principle, denois-
ing diffusion models (DDMs) trained with web-scale data
can generate highly realistic images conditioned on input
text, layouts, and scene graphs [24, 25, 27]. After im-
age generation, the next important application of DDMs
being explored by the research community is that of im-age editing. Models such as DALL-E-2 [24] and Stable
Diffusion [25] can perform inpainting, allowing users to
edit images through manual annotation. Methods such as
SDEdit [20] have demonstrated that both synthetic and real
images can be edited using stroke or composite guidance via
DDMs. However, the goal of holistic image editing tools
that can edit any real/artificial image using purely text is
still a field of active research.
The generative process of DDMs starts with an initial
noise vector ( xT) and performs iterative denoising (typ-
ically with a guidance signal e.g. in the form of text-
conditional denoising), ending with a realistic image sam-
ple (x0). Reversing this generative process is key in solving
this image editing problem for one family of approaches.
Formally, this problem is known as “inversion” i.e., finding
the initial noise vector that produces the input image when
passed through the diffusion process.
A naïve approach for inversion is to add Gaussian noise
to the input image and perform a predefined number of
diffusion steps, which typically results in significant dis-
tortions [7]. A more robust method is adapting Denois-
ing Diffusion Implicit Models (DDIMs) [29]. Unlike the
commonly used Denoising Diffusion Probabilistic Models
(DDPMs) [9], the generative process in DDIMs is defined
in a non-Markovian manner, which results in a determinis-
tic denoising process. DDIM can also be used for inversion,
deterministically noising an image to obtain the initial noise
vector ( x0→xT).
DDIM inversion has been used for editing real images
through text methods such as DDIBs [30] and Prompt-to-
Prompt (P2P) image editing [7]. After DDIM inversion,
P2P edits the original image by running the generative pro-
cess from the noise vector and injecting conditioning infor-
mation from a new text prompt through the cross-attention
layers in the diffusion model, thus generating an edited im-
age that maintains faithfulness to the original content while
incorporating the edit. However, as noted in the original
P2P work [7], the DDIM inversion is unstable in many
cases—encoding from x0toxTand back often results in
inexact reconstructions of the original image as in Fig. 2.
These distortions limit the ability to perform significant ma-
[Figure 1 panels: original images and EDICT edits across the dog breeds Golden Retriever, Chihuahua, Poodle, Dalmatian, German Shepherd, and Husky.]
Figure 1. EDICT enables complex real image edits, such as editing dog breeds. We highlight the fine-grain text preservation in the bottom
row of examples, with the message remaining even as the dog undergoes dramatic transformations. More examples, including baseline
comparisons for all image-breed pairs, are included in the Supplementary. All original images from the ImageNet 2012 validation set.
nipulations through text, as the increase in corruption is cor-
related with the strength of the conditioning.
To improve the inversion ability of DDMs and enable
robust real image editing, we diagnose the problems in
DDIM inversion, and offer a solution: Exact Diffusion In-
version via Coupled Transformations (EDICT). EDICT is a
re-formulation of the DDIM process inspired by coupling
layers in normalizing flow models [4, 5, 12] that allows for
mathematically exact inversion. By maintaining two cou-
pled noise vectors in the diffusion process, EDICT enables
recovery of the original noise vector in the case of model-
generated images; and for real imagery, initial noise vec-
tors that are guaranteed to map to the original image when
the EDICT generative process is run. While EDICT dou-
bles the computation time of the diffusion process, it can
be combined with any pretrained DDM model and does not require any computationally-expensive model finetun-
ing, prompt tuning, or multiple images.
For the standard generative process, EDICT approxi-
mates DDIM well, resulting in nearly identical generations
given equal initial conditions. For real images, EDICT can
recover a noise vector which yields an exact reconstruction
when used as input to the generative process. Experiments
with the COCO dataset [16] show that EDICT can recover
complex image features such as detailed textures, thin ob-
jects, subtle reflections, faces, and text, while DDIM fails to
do so consistently. Finally, using the initial noise vectors de-
rived from a real image with EDICT, we can sample from a
DDM and perform complex edits or transformations to real
images using textual guidance. We show editing capabili-
ties including local and global modifications of objects and
background and object transformations (Fig. 1).
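To convey the coupling principle without reproducing the paper's exact update equations, the sketch below applies an alternating affine-style step to two coupled states and then undoes it exactly. The mixing coefficient p and the step function f are placeholders for whatever the full method uses; the point is only that each state is updated from the other, so every step is algebraically invertible.

import torch

def coupled_step(x, y, f, p=0.93):
    """Alternating update: each state is refreshed using a function of the other state."""
    x_new = p * x + (1.0 - p) * f(y)
    y_new = p * y + (1.0 - p) * f(x_new)
    return x_new, y_new

def coupled_step_inverse(x_new, y_new, f, p=0.93):
    y = (y_new - (1.0 - p) * f(x_new)) / p   # f(x_new) is computable from x_new, so y is recovered exactly
    x = (x_new - (1.0 - p) * f(y)) / p       # then f(y) is computable, so x is recovered exactly
    return x, y

f = lambda z: torch.tanh(z)                  # stand-in for a deterministic, DDIM-style model step
x0 = torch.randn(4, 8)
y0 = x0.clone()
x1, y1 = coupled_step(x0, y0, f)
x_rec, y_rec = coupled_step_inverse(x1, y1, f)
print(torch.allclose(x_rec, x0), torch.allclose(y_rec, y0))   # True True, up to float precision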
[Figure 2 panels: Original Image, DDIM Unconditional, DDIM Conditional, and EDICT reconstructions, each annotated with its Recon. MSE, for the prompts “A banana is laying on a small plate”, “A dog”, and “A couple standing together holding Wii controllers next to a building”.]
Figure 2. While both unconditional and conditional DDIM [29]
often fail to accurately reconstruct real images, leading to loss of
global image structure and/or finer details, EDICT is able to al-
most perfectly reconstruct even complex scenes. Examples from
ImageNet and COCO with 50 DDIM steps. Captions used only in
DDIM Conditional reconstruction with a guidance scale of 3.
|
Wang_NeuWigs_A_Neural_Dynamic_Model_for_Volumetric_Hair_Capture_and_CVPR_2023 | Abstract
The capture and animation of human hair are two of the
major challenges in the creation of realistic avatars for the
virtual reality. Both problems are highly challenging, be-
cause hair has complex geometry and appearance and ex-
hibits challenging motion. In this paper, we present a two-
stage approach that models hair independently of the head
to address these challenges in a data-driven manner. The
first stage, state compression, learns a low-dimensional la-
tent space of 3D hair states including motion and appear-
ance via a novel autoencoder-as-a-tracker strategy. To bet-
ter disentangle the hair and head in appearance learning,
we employ multi-view hair segmentation masks in combi-
nation with a differentiable volumetric renderer. The sec-
ond stage optimizes a novel hair dynamics model that per-
forms temporal hair transfer based on the discovered latent
codes. To enforce higher stability while driving our dynam-
ics model, we employ the 3D point-cloud autoencoder from
the compression stage for de-noising of the hair state. Our
model outperforms the state of the art in novel view synthe-
sis and is capable of creating novel hair animations without
relying on hair observations as a driving signal.†
| 1. Introduction
The ability to model the details of human hair with high
fidelity is key to achieving realism in human avatar creation:
hair can be very important to one’s appearance! The real-
ism of human hair involves many aspects like geometry, ap-
pearance and interaction with light. The sheer number of
hair strands leads to a very complex geometry, while the in-
teractions between light and hair strands leads to non-trivial
view-dependent appearance changes. The dynamics of hair
creates another axis for evaluating the realism of a human
avatar while being similarly hard to capture and model due
to the complexity of the hair motion space as well as severe
self-occlusions.
These problems lead to two major challenges for cre-
†Project page at https://ziyanw1.github.io/neuwigs/.
Figure 1. Animation from Single View Captures. Our model can
generate realistic hair animation from single view video based on
head motion and gravity direction. Original captures of subjects
wearing a wig cap are shown in red boxes.
ating realistic avatars with hair: appearance modeling and
dynamics modeling. While modern capture systems can re-
construct high fidelity hair appearance from a sparse and
discrete set of real world observations frame by frame, no
dynamics information is discovered through this process.
To capture dynamics, we need to perform tracking to align
those reconstructions in both space and time. However, do-
ing tracking and reconstruction does not directly solve the
problem of novel motion generation. To achieve that, we
need to go beyond the captured data in space and time by
creating a controllable dynamic hair model.
In conventional animation techniques, hair geometry is
created by an artist manually preparing 3D hair grooms.
Motion of the 3D hair groom is created by a physics simula-
tor where an artist selects the parameters for the simulation.
This process requires expert knowledge. In contrast, data-
driven methods aim to achieve hair capture and animation
in an automatic way while preserving metric photo-realism.
Most of the current data-driven hair capture and animation
approaches learn to regress a dense 3D hair representation
that is renderable directly from per-frame driving signals,
without modeling dynamics.
However, there are several factors that limit the practical
use of these data-driven methods for hair animation. First
of all, these methods mostly rely on sophisticated driving
signals, like multi-view images [22, 33], a tracked mesh of
the hair [24], or tracked guide hair strands [42], which are
hard to acquire. Furthermore, from an animation perspec-
tive, these models are limited to rendering hair based on hair
observations and cannot be used to generate novel motion of
hair. Sometimes it is not possible to record the hair driving
signals at all. We might want to animate hair for a person
wearing accessories or equipment that (partially) obstructs
the view of their hair, for example VR glasses; or animate a
novel hair style for a subject.
To address these limitations of existing data-driven hair
capture and animation approaches, we present a neural dy-
namic model that is able to animate hair with high fidelity
conditioned on head motion and relative gravity direction.
By building such a dynamic model, we are able to gener-
ate hair motions by evolving an initial hair state into a fu-
ture one, without relying on per-frame hair observation as
a driving signal. We utilize a two-stage approach for cre-
ating this dynamic model: in the first stage, state compres-
sion, we perform dynamic hair capture by learning a hair
autoencoder from multi-view video captures with an evolv-
ing tracking algorithm. Our method is capable of capturing
a temporally consistent, fully renderable volumetric repre-
sentation of hair from videos with both head and hair. Hair
states with different time-stamps are parameterized into a
semantic embedding space via the autoencoder. In the sec-
ond stage, we sample temporally adjacent pairs from the
semantic embedding space and learn a dynamic model that
can perform the hair state transition between each state in
the embedding space given the previous head motion and
gravity direction. With such a dynamic model, we can per-
form hair state evolution and hair animation in a recurrent
manner which is not driven by existing hair observations.
As shown in Fig. 1, our method is capable of generating
realistic hair animation with different hair styles of single
view captures of a moving head with a wig cap. In sum-
mary:• We present NeuWigs, a novel end-to-end data-driven
pipeline with a volumetric autoencoder as the backbone
for real human hair capture and animation, learnt from
multi-view RGB images.
• We learn the hair geometry, tracking and appearance end-
to-end with a novel autoencoder-as-a-tracker strategy for
hair state compression, where the hair is modeled sepa-
rately from the head using multi-view hair segmentation.
• We train an animatable hair dynamic model that is robust
to drift using a hair state denoiser realized by the 3D au-
toencoder from the compression stage.
|
Wang_Masked_Image_Modeling_With_Local_Multi-Scale_Reconstruction_CVPR_2023 | Abstract
Masked Image Modeling (MIM) achieves outstanding
success in self-supervised representation learning. Unfortu-
nately, MIM models typically have huge computational bur-
den and slow learning process, which is an inevitable ob-
stacle for their industrial applications. Although the lower
layers play the key role in MIM, existing MIM models con-
duct the reconstruction task only at the top layer of the encoder.
The lower layers are not explicitly guided and the interac-
tion among their patches is only used for calculating new
activations. Considering the reconstruction task requires
non-trivial inter-patch interactions to reason target signals,
we apply it to multiple local layers including lower and up-
per layers. Further, since the multiple layers are expected to learn
the information of different scales, we design local multi-
scale reconstruction, where the lower and upper layers re-
construct fine-scale and coarse-scale supervision signals
respectively. This design not only accelerates the represen-
tation learning process by explicitly guiding multiple lay-
ers, but also facilitates multi-scale semantical understand-
ing to the input. Extensive experiments show that with sig-
nificantly less pre-training burden, our model achieves com-
parable or better performance on classification, detection
and segmentation tasks than existing MIM models. Code is
available with both MindSpore and PyTorch.
| 1. Introduction
Recently, Masked Image Modeling (MIM) [2, 21, 50]
achieves outstanding success in the field of self-supervised
visual representation learning, which is inspired by the
Masked Language Modeling (MLM) [4, 29] in natural lan-
guage processing and benefits from the development of vi-
sion transformers [16, 33, 45]. MIM learns semantic repre-
sentations by first masking some parts of the input and then
predicting their signals based on the unmasked parts, e.g.,
normalized pixels [21,50], discrete tokens [2,15], HOG fea-
ture [47], deep features [1, 54] or frequencies [32, 49].
*Corresponding author.Despite superior performance on various downstream
tasks, these models have huge computational burden and
slow learning process [26]. They typically require thou-
sands of GPU Hours for pre-training on ImageNet-1K to
get generalizing representations. Since we expect to pre-
train these models on more massive amount of unlabeled
data (e.g., free Internet data) to obtain more generalizing
representations in practice, the pre-training efficiency is
an inevitable bottleneck limiting the industrial applications
of MIM. How to accelerate the representation learning in
MIM is an important topic. To this end, MAE [21] pio-
neered the asymmetric encoder-decoder strategy, where the
costly encoder only operates few visible patches and the
lightweight decoder takes all the patches as input for pre-
diction. Further, GreenMIM [26] extends the asymmetric
encoder-decoder strategy to hierarchical vision transform-
ers (e.g., Swin [33]). Besides, [8, 19, 30] shrinks the input
resolution to lessen the input patches, thereby reducing the
computational burden. However, they all aim to accelerate
the encoding process rather than the representation learning.
In MIM, the learning of upper layers depends on that of
lower ones during pre-training, since the upper-layer fea-
tures are calculated from the lower layers. Besides, during
fine-tuning the upper layers are typically tuned quickly to
adapt to the downstream task while the lower ones change
more slowly and need to be well-learned [2, 25, 52]. Even
fine-tuning only the several upper layers and freezing the
others can obtain similar performance [21]. Therefore, the
lower layers of encoder play the key role in MIM. However,
all existing MIM models only conduct reconstruction task at
the top layer of encoder and the lower ones are not explicitly
guide, thus the interaction among their patches is only used
for calculating the activations of the next layer. Considering
the reconstruction task requires non-trivial inter-patch in-
teractions to reason target signals, we apply it to both lower
and upper layers to explicitly guide them and thus accelerate
the overall learning process. Using tiny decoder is sufficient
for each local reconstruction task and does not significantly
increase the computational burden.
How to properly conduct reconstruction tasks at multi-
ple local layers is a non-trivial problem. For example, ap-
[Figure 1 plots: Top-1 Accuracy (%) vs. GPU Hours (h) for (a) ViT-B, comparing BEiT, MAE, LoMaR, and LocalMIM (ours), and (b) Swin-B, comparing SimMIM192, GreenMIM, and LocalMIM (ours).]
Figure 1. Top-1 fine-tuning accuracy on ImageNet-1K vs. Pre-training duration. The duration is estimated on a machine with one Tesla
V100-32G GPU, CUDA 10.2 and PyTorch 1.8. ‘GPU Hours’ is the running time on single GPU.
plying the top-layer reconstruction task to carefully chosen
local layers of ViT [16] can not achieve meaningful im-
provement. In general, the lower layers exploit low-level
information and the upper ones learn high-level informa-
tion [18, 36], so it is not appropriate to use the supervision
signals of same scale for multiple local reconstruction tasks.
Here ’scale’ is the spatial size of the supervision signals cal-
culated from the divided input regions, e.g., the signals from
the p×p regions in an input of H×W resolution have the
scale of H/p × W/p. The fine-scale and coarse-scale super-
visions typically contain low-level and high-level informa-
tion of the input respectively, and these multi-scale supervi-
sions from input are widely ignored by existing MIM mod-
els. To this end, we propose local multi-scale reconstruction
where the lower and upper layers reconstruct fine-scale and
coarse-scale supervisions respectively. This design not only
accelerates the representation learning process, but also fa-
cilitates multi-scale semantical understanding to the input.
When the decoded predictions have a different scale from the
supervisions (e.g., on ViT), we use the deconvolution/pool
operations to rescale them. We also apply the asymmetric
encoder-decoder strategy [21, 26] for quick encoding. Our
model, dubbed LocalMIM, is illustrated in Fig. 2 (a).
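A minimal sketch of the resulting training signal is given below: lightweight decoders are attached to two intermediate layers, the lower one regressing fine-scale targets and the upper one coarse-scale targets, and the losses are summed. The toy backbone, the choice of tapped layers, the average-pooled pixel targets, and the nearest-neighbor rescaling are assumptions for illustration (the paper uses deconvolution/pool operations), and masking is omitted for brevity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalMultiScaleSketch(nn.Module):
    def __init__(self, dim=96, depth=4, patch=16):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(3, dim, patch, stride=patch)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, 4, batch_first=True) for _ in range(depth))
        self.scales = {1: 8, 3: 32}                        # layer index -> target region size in pixels
        self.decoders = nn.ModuleDict({str(i): nn.Linear(dim, 3) for i in self.scales})

    def forward(self, imgs):                               # imgs: (B, 3, H, W)
        b, _, h, w = imgs.shape
        x = self.embed(imgs).flatten(2).transpose(1, 2)    # (B, N, dim)
        loss = imgs.new_zeros(())
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            if i in self.scales:
                s = self.scales[i]
                target = F.avg_pool2d(imgs, s)             # multi-scale supervision from the input
                feat = x.transpose(1, 2).reshape(b, -1, h // self.patch, w // self.patch)
                feat = F.interpolate(feat, size=target.shape[-2:], mode='nearest')
                pred = self.decoders[str(i)](feat.flatten(2).transpose(1, 2))
                loss = loss + F.mse_loss(pred, target.flatten(2).transpose(1, 2))
        return loss

print(LocalMultiScaleSketch()(torch.rand(2, 3, 224, 224)).item())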
Overall, we summarize our contributions as follows.
• To the best of our knowledge, this is the first work in
MIM to conduct local reconstructions and use multi-
scale supervisions from the input.
• Our model is architecture-agnostic and can be used in
both columnar and pyramid architectures.
• From extensive experiments, we find that 1) Lo-
calMIM is more efficient than existing MIM models,
as shown in Fig. 1 and Table 1. For example, Lo-
calMIM achieves the best MAE result with 3.1× ac-
celeration on ViT-B and the best GreenMIM result
with 6.4× acceleration on Swin-B. 2) In terms of top-
1 fine-tuning accuracy on ImageNet-1K, LocalMIM
achieves 84.0% using ViT-B and 84.1% using Swin-B with
isting MIM models. The obtained representations also
achieve better generalization on detection and segmen-
tation downstream tasks, as shown in Table 2 and 3.
|
Xie_SmartBrush_Text_and_Shape_Guided_Object_Inpainting_With_Diffusion_Model_CVPR_2023 | Abstract
Generic image inpainting aims to complete a corrupted
image by borrowing surrounding information, which barely
generates novel content. By contrast, multi-modal inpaint-
ing provides more flexible and useful controls on the in-
painted content, e.g., a text prompt can be used to describe
an object with richer attributes, and a mask can be used to
constrain the shape of the inpainted object rather than be-
ing only considered as a missing area. We propose a new
diffusion-based model named SmartBrush for completing a
missing region with an object using both text and shape-
guidance. While previous work such as DALLE-2 and Sta-
ble Diffusion can do text-guided inpainting, they do not sup-
port shape guidance and tend to modify background tex-
ture surrounding the generated object. Our model incor-
porates both text and shape guidance with precision con-
trol. To preserve the background better, we propose a novel
training and sampling strategy by augmenting the diffusion
U-net with object-mask prediction. Lastly, we introduce
a multi-task training strategy by jointly training inpaint-
* Work done during internship at Adobe.ing with text-to-image generation to leverage more training
data. We conduct extensive experiments showing that our
model outperforms all baselines in terms of visual quality,
mask controllability, and background preservation.
| 1. Introduction
Traditional image inpainting aims to fill the missing area
in images conditioned on surrounding pixels, lacking con-
trol over the inpainted content. To alleviate this, multi-
modal image inpainting offers more control through addi-
tional information, e.g. class labels, text descriptions, seg-
mentation maps, etc. In this paper, we consider the task
of multi-modal object inpainting conditioned on both a text
description and the shape of the object to be inpainted (see
Fig. 1). In particular, we explore diffusion models for this
task inspired by their superior performance in modeling
complex image distributions and generating high-quality
images.
Diffusion models (DMs) [7, 24], e.g., Stable Diffu-
sion [20], DALL-E [18, 19], and Imagen [21] have shown
promising results in text-to-image generation. They can
also be adapted to the inpainting task by replacing the ran-
dom noise in the background region with a noisy version of
the original image during the diffusion reverse process [14].
However, this leads to undesirable samples since the model
cannot see the global context during sampling [16]. To ad-
dress this, GLIDE [16] and Stable Inpainting (inpainting
specialist v1.5 from Stable Diffusion) [20] randomly erase
part of the image and fine-tune the model to recover the
missing area conditioned on the corresponding image cap-
tion. However, semantic misalignment between the missing
area (local content) and global text description may cause
the model to fill in the masked region with background in-
stead of precisely following the text prompt as shown in
Fig. 1 (“Glide” and “Stable Inpainting”). We refer to this
phenomenon as text misalignment .
An alternative way to perform multi-modal image in-
painting is to utilize powerful language-vision models, e.g.,
CLIP [17]. Blended diffusion [2] uses CLIP to compute
the difference between the image embedding and the input
text embedding and then injects the difference into the sam-
pling process of a pretrained unconditional diffusion model.
However, CLIP models tend to capture the global and high-
level image features, thus there is no incentive to generate
objects aligning with the given mask (see “Blended Diffu-
sion” in Fig. 1). We denote this phenomenon as mask mis-
alignment . A recent GAN-based work CogNet [28] pro-
poses to use shape information from instance segmentation
dataset and predict the class of missing objects to address
this problem. But it doesn’t support text input. Another is-
sue for existing inpainting methods is background preser-
vation in which case they often produce distorted back-
ground surrounding the inpainted object as shown in Fig. 1
(bottom row).
To address above challenges, we introduce a precision
factor into the input masks, i.e., our model not only takes
a mask as input but also information about how closely the
inpainted object should follow the mask’s shape. To achieve
this we generate different types of masks from fine to coarse
by applying Gaussian blur to accurate instance masks and
use the masks and their precision type to train the guided
diffusion model. With this setup, we allow users to ei-
ther use coarse masks which will contain the desired object
somewhere within the mask or to provide detailed masks
that outline the shape of the object exactly. Thus, we can
supply very accurate masks and the model will fill the en-
tire mask with the object described by the text prompt (see
the first row in Fig. 1), while, on the other hand, we can
also provide very coarse masks ( e.g., a bounding box) and
the model is free to insert the desired object within the mask
area such that the object is roughly bounded by the mask.
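The fine-to-coarse mask generation described here can be sketched as follows. This is a minimal illustration assuming binary numpy masks; the blur strengths and the re-binarization threshold are hypothetical choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_mask_pyramid(instance_mask: np.ndarray, sigmas=(0, 4, 16, 32)):
    """Generate masks from fine to coarse by blurring an accurate instance mask.

    instance_mask: binary (H, W) array, 1 inside the object.
    sigmas: increasing Gaussian std-devs; sigma=0 keeps the exact mask.
    Returns a list of (mask, precision_type) pairs, where precision_type
    indexes how closely the inpainted object should follow the mask.
    """
    masks = []
    for precision_type, sigma in enumerate(sigmas):
        if sigma == 0:
            blurred = instance_mask.astype(np.float32)
        else:
            blurred = gaussian_filter(instance_mask.astype(np.float32), sigma=sigma)
        # Re-binarize: any pixel touched by the blurred mask joins the (coarser)
        # editable region, so the region grows with sigma.
        coarse = (blurred > 0.05).astype(np.float32)
        masks.append((coarse, precision_type))
    return masks
```

During training, both the coarse mask and its precision type can be fed to the model so it learns how strictly to respect the given shape.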
One important characteristic, especially for coarse masks
such as bounding boxes, is that we want to keep the back-
ground within the inpainted area consistent with the original image. To achieve this, we not only encourage the model to
inpaint the masked region but also use a regularization loss
to encourage the model to predict an instance mask of the
object it is generating. At test time we replace the coarse
mask with the predicted mask during sampling to preserve
background as much as possible which leads to more con-
sistent results (second row in Fig. 1).
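A schematic of this background-preserving step at test time might look like the sketch below. The two-head `model`, the `denoise_step` and `noise_to` helpers, and the blending order are hypothetical stand-ins for illustration; the paper's actual interfaces may differ.

```python
import torch

@torch.no_grad()
def inpaint_with_mask_refinement(model, x_orig, coarse_mask, text_emb,
                                 noise_to, denoise_step, timesteps):
    """Sample an inpainted image while swapping the coarse mask for the
    model's predicted instance mask to preserve background pixels.

    x_orig:      (B, C, H, W) original image (normalized).
    coarse_mask: (B, 1, H, W) user-provided mask, 1 = editable region.
    """
    x_t = torch.randn_like(x_orig)
    mask = coarse_mask
    for t in timesteps:                                  # e.g. reversed(range(T))
        eps, pred_mask = model(x_t, t, text_emb, mask)   # hypothetical two-head model
        x_t = denoise_step(x_t, eps, t)                  # one reverse-diffusion update
        mask = torch.clamp(pred_mask, 0.0, 1.0)          # tighter, predicted object mask
        x_known = noise_to(x_orig, t)                    # forward-noise original to level t
        # Copy background (outside the predicted mask) back from the original image.
        x_t = mask * x_t + (1.0 - mask) * x_known
    return x_t
```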
We evaluate our model on several challenging object in-
painting tasks and show that it achieves state-of-the-art re-
sults on object inpainting across several datasets and exam-
ples. Our user study shows that users prefer the outputs of
our model as compared to DALLE-2 and Stable Inpainting
across several axes of evaluation such as shape, text align-
ment, and realism. To summarize our contributions:
• We introduce a text and shape guided object inpainting
diffusion model, which is conditioned on object masks
of different precision, achieving a new level of control
for object inpainting.
• To preserve the image background with coarse input
masks, the model is trained to predict a foreground
object mask during inpainting for preserving original
background surrounding the synthesized object.
• We propose a multi-task training strategy by jointly
training object inpainting with text-to-image genera-
tion to leverage more training data.
|
Wu_CHMATCH_Contrastive_Hierarchical_Matching_and_Robust_Adaptive_Threshold_Boosted_Semi-Supervised_CVPR_2023 | Abstract
The recently proposed FixMatch and FlexMatch have
achieved remarkable results in the field of semi-supervised
learning. But these two methods go to two extremes as Fix-
Match and FlexMatch use a pre-defined constant threshold
for all classes and an adaptive threshold for each category,
respectively. By only investigating consistency regulariza-
tion, they also suffer from unstable results and indiscrim-
inative feature representation, especially under the situa-
tion of few labeled samples. In this paper, we propose a
novel CHMatch method, which can learn robust adaptive
thresholds for instance-level prediction matching as well as
discriminative features by contrastive hierarchical match-
ing. We first present a memory-bank based robust thresh-
old learning strategy to select highly-confident samples. In
the meantime, we make full use of the structured informa-
tion in the hierarchical labels to learn an accurate affinity
graph for contrastive learning. CHMatch achieves very sta-
ble and superior results on several commonly-used bench-
marks. For example, CHMatch achieves 8.44% and 9.02%
error rate reduction over FlexMatch on CIFAR-100 under
WRN-28-2 with only 4 and 25 labeled samples per class,
respectively1.
| 1. Introduction
Deep learning [18, 33, 41] achieves great success in the
past decade based on large-scale labeled datasets. However,
it is generally hard to collect and expensive to manually an-
notate such kind of large dataset in practice, which limits its
application. Semi-supervised learning (SSL) attracts much
attention recently, since it can make full use of a few labeled
and massive unlabeled data to facilitate the classification.
For the task of SSL [9, 16, 31, 32], various methods have
been proposed and promising results have been achieved.
1Project address: https://github.com/sailist/CHMatch
Consistency regularization [44] is one of the most in-
fluential techniques in this area. For example, pseudo-
ensemble [3] and temporal ensembling [23] investigate
the instance-level robustness before and after perturbation.
Mean teacher [37] introduces the teacher-student frame-
work and studies the model-level consistency. SNTG [27]
further constructs the similarity graph over the teacher
model to guide the student learning. However, the super-
vised signal generated by only this strategy is insufficient
for more challenging tasks.
Recently, by combining pseudo-labeling and consis-
tency between weak and strong data augmentations, Fix-
Match [34] achieves significant improvement. But it relies
on a high fixed threshold for all classes, and only a few
unlabeled samples whose prediction probability is above
the threshold are selected for training, resulting in undesir-
able efficiency and convergence. Towards this issue, Flex-
Match [42] proposes a curriculum learning [4] strategy to
learn adjustable class-specific threshold, which can well im-
prove the results and efficiency. But it still suffers from
the following limitations: (1) The results of both FixMatch
and FlexMatch are unstable and of large variances, which is
shown in Figure 1(a), especially when there are only a small
amount of labeled samples; (2) Only instance-level consis-
tency is investigated, which neglects inter-class relationship
and may make the learned feature indiscriminative.
To address the above issues, we propose a novel
CHMatch method based on hierarchical label and con-
trastive learning, which takes both the instance-level pre-
diction matching and graph-level similarity matching into
account. Specifically, we first present a memory-bank based
robust adaptive threshold learning strategy, where we only
need one parameter to compute the near-global threshold
for all categories. We compare this strategy with FixMatch
in Figure 1(b). Then this adaptive threshold is used for
instance-level prediction matching under the similar Fix-
Figure 1. Motivation of our method. (a) The results of FixMatch and FlexMatch are unstable and of large variances, while our method can handle this issue. (b) FixMatch sets a fixed threshold (0.95), while our method sets dynamic proportions in different epochs, leading to an adaptive threshold. (c) Many datasets (e.g., CIFAR-100) have a hierarchical label structure, and we aim to take advantage of this to promote SSL.
Match paradigm. More importantly, we further propose a
hierarchical label guided graph matching module for con-
trastive feature learning, where the key lies in the construc-
tion of an accurate affinity graph. As shown in Figure 1(c),
we notice that categories in many datasets as well as real-
world applications have a hierarchical structure, which is
neglected by existing methods for semi-supervised learning.
We aim to make use of the coarse-grained labels to guide the
general classification, which can also provide extra supervi-
sion signals especially under the situation of limited labeled
samples. For implementation, we first add another head for
the coarse-grained classification. Then each affinity graph
is constructed by the fine-grained and coarse-grained clas-
sification branches. Together with the near-global thresh-
old, we can get the precise affinity relationship after graph
matching, which is then used for contrastive learning. We
also conduct extensive experiments on several commonly-
used datasets to verify the effectiveness of the proposed
method as well as each module.
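One way to read the memory-bank threshold of Figure 1(b) is as a running quantile over recent prediction confidences, with the kept proportion growing during training. The sketch below is our simplified reading rather than the paper's exact formulation; the bank size and the proportion schedule are assumptions.

```python
from collections import deque
import numpy as np

class AdaptiveThreshold:
    """Near-global confidence threshold from a memory bank of recent predictions."""

    def __init__(self, bank_size=25600):
        self.bank = deque(maxlen=bank_size)    # stores max softmax probabilities

    def update(self, probs: np.ndarray):
        """probs: (N, C) softmax outputs of unlabeled samples in the current batch."""
        self.bank.extend(probs.max(axis=1).tolist())

    def threshold(self, keep_ratio: float) -> float:
        """Pick the threshold so roughly `keep_ratio` of banked samples pass it;
        increasing keep_ratio over epochs admits more pseudo-labels."""
        if not self.bank:
            return 1.0                          # accept nothing until the bank fills
        return float(np.quantile(np.asarray(self.bank), 1.0 - keep_ratio))
```

A single proportion parameter then controls the threshold for all classes, in contrast to per-class thresholds.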
Our main contributions can be summarized as follows:
We propose a novel CHMatch method, which con-
tains instance-level and graph-level matching for as-
signment and feature learning, respectively. To the best
of our knowledge, this is the first study that makes full
use of the structured information matching in hierar-
chical labels to promote semi-supervised learning.
We come up with a memory-bank based highly-
confident sample selection strategy, which can gen-
erate robust adaptive threshold for prediction-level
matching, leading to more robust results, and accel-
erate the training process.
Benefiting from the contrastive hierarchical matching, our
method can construct a more accurate affinity graph
for the proposed contrastive learning module, leading
to more discriminative feature representation.
We conduct extensive experiments on four benchmark
datasets under different backbones, and the proposed
CHMatch outperforms these state-of-the-art methods. |
Wang_Multilateral_Semantic_Relations_Modeling_for_Image_Text_Retrieval_CVPR_2023 | Abstract
Image-text retrieval is a fundamental task to bridge vi-
sion and language by exploiting various strategies to fine-
grained alignment between regions and words. This is
still tough mainly because of one-to-many correspondence,
where a set of matches from another modality can be ac-
cessed by a random query. While existing solutions to
this problem including multi-point mapping, probabilistic
distribution, and geometric embedding have made promis-
ing progress, one-to-many correspondence is still under-
explored. In this work, we develop a Multilateral Semantic
Relations Modeling (termed MSRM ) for image-text re-
trieval to capture the one-to-many correspondence between
multiple samples and a given query via hypergraph model-
ing. Specifically, a given query is first mapped as a prob-
abilistic embedding to learn its true semantic distribution
based on Mahalanobis distance. Then each candidate in-
stance in a mini-batch is regarded as a hypergraph node
with its mean semantics while a Gaussian query is mod-
eled as a hyperedge to capture the semantic correlations
beyond the pair between candidate points and the query.
Comprehensive experimental results on two widely used
datasets demonstrate that our MSRM method can outper-
form state-of-the-art methods in the settlement of multi-
ple matches while still maintaining the comparable perfor-
mance of instance-level matching.
| 1. Introduction
Image and text are two important information carriers to
help human and intelligent agents to better understand the
real world. Many explorations [9, 18, 35] have been con-
ducted in the computer vision as well as natural language
processing domains to bridge these two modalities [16]. As
a fundamental yet challenging topic in this research, image-
Figure 1. Examples of one-to-many correspondence caused by the inherent nature of different modalities. The existing point-to-point mapping cannot capture the semantic richness of data.
text retrieval can benefit other vision-language tasks [11] in
two ways, e.g. images search for a given sentence and the
retrieval of descriptions for an image query, and spread to a
variety of applications, such as person search [48], sketch-
based image retrieval [33], and food recipe retrieval [52].
Due to the power of deep metric learning [30, 31] in vi-
sual embedding augmentation, its core idea is intuitively
extended into image-text retrieval to consider the domain
differences. The naive strategy [3, 8, 38] is based on
triplet loss to learn distinctive representations at the global
level only with the help of positive pair and a hard neg-
ative one. However, such random sampling cannot effec-
tively select informative pairs, which causes a slow con-
vergence and poor performance [43]. Thus, several re-
weighting methods [1,4,42,43] are proposed to address this
issue by assigning different weights to positive and nega-
tive pairs. Moreover, a flat vector is difficult to infer the
complex relationships among many objects existing in a vi-
sual scene [16]. Hence, advanced methods formulate var-
ious attention mechanisms [2, 15, 20, 40, 50, 51] to distin-
guish important features from those negligible or irrelevant
ones based on Top-K region features obtained from the pre-
trained Faster R-CNN [29].
Actually, the prevailing image-text retrieval approaches
are instance-based, which only focus on the match of the
ground-truth sample. Despite their amazing success, image-
text retrieval is still very difficult because of the one-to-
many correspondence [6] where a set of candidates can be
obtained. This phenomenon is partially caused by the inher-
ent heterogeneity between vision and language. In detail, an
image can cover all objects in a given scene yet lacks con-
text reasoning like text [23], while a textual description may
only describe a part scene of interest based on the subjec-
tive choices [6]. As illuminated by Figure 1, in the case
of image retrieval under the textual description of ‘ A bat-
ter at a baseball game swinging his bat ’, the ground-truth
image v1 can be retrieved with effort but other instances
with sufficient similarity like v2 and v3 are possibly dis-
carded. A similar phenomenon also exists in another case of
descriptions search for a given image. The essential cause
of multiple matches is the point-to-point mapping strategy
adopted by the instance-level approaches. That is, they only
struggle to capture the one-to-one correspondence based on
the ground-truth pairs in the semantic space. Undoubtedly,
such a plain strategy suffers from insufficient representation
in one-to-many correspondence.
Recently, several works attempt to learn more distinc-
tive representations by cross-modal integration [23, 45] and
progressive relevance learning [22, 24]. However, they still
adopt point-to-point mapping and can not address the is-
sue of multiple matches. Based on the hedged instance
embedding [25] and the box embedding [17, 37], Proba-
bilistic Cross-Modal Embedding (PCME) [6] and Point-to-
Rectangle Matching (P2RM) [41] are successively devel-
oped to learn richer representations based on semantic un-
certainty capture. Motivated by them [6, 41], this work in-
troduces a novel Multilateral Semantic Relations Modeling
(MSRM ) method to capture the one-to-many correspon-
dence between a given query and candidates in image-text
retrieval. Concretely, our work mainly includes two parts:
semantic distribution learning for a query and multilateral
relations modeling for retrieval. The first part maps a given
query as a probabilistic embedding to learn its true seman-
tic distribution based on Mahalanobis distance. Then each
candidate instance in a mini-batch is regarded as a hyper-
graph node with its mean semantics while a Gaussian query
is modeled as a hyperedge. Afterwards, the second part
leverages the hyperedge convolution operation to capture
the beyond pairwise semantic correlations between candi-
date points and the query.
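As a concrete reference for the first part, the Mahalanobis distance between candidate embeddings and a Gaussian query can be computed as below; the diagonal-covariance parameterization is our assumption for this sketch.

```python
import torch

def mahalanobis_to_query(candidates, query_mean, query_logvar):
    """Distance of each candidate embedding to a Gaussian query.

    candidates:   (N, D) mean embeddings of the mini-batch candidates.
    query_mean:   (D,)   mean of the probabilistic query embedding.
    query_logvar: (D,)   log of the (diagonal) variance of the query.
    Returns (N,) Mahalanobis distances; smaller means a more likely match.
    """
    diff = candidates - query_mean                 # (N, D)
    inv_var = torch.exp(-query_logvar)             # (D,)
    return torch.sqrt((diff * diff * inv_var).sum(dim=-1) + 1e-8)
```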
In summary, our contributions can be concluded as:
• We introduce an interpretable method named Multilat-
eral Semantic Relations Modeling to better resolve the
one-to-many correspondence for image-text retrieval.
• We propose the Semantic Distribution Learning mod-
ule to extract the true semantics of a query based on
Mahalanobis distance, which can infer more accurate
multiple matches.
• We leverage the hyperedge convolution to model the high-order correlations between a Gaussian query and
candidates for further improving the accuracy.
|
Xiong_Similarity_Metric_Learning_for_RGB-Infrared_Group_Re-Identification_CVPR_2023 | Abstract
Group re-identification (G-ReID) aims to re-identify a
group of people that is observed from non-overlapping cam-
era systems. The existing literature has mainly addressed
RGB-based problems, but RGB-infrared (RGB-IR) cross-
modality matching problem has not been studied yet. In this
paper, we propose a metric learning method Closest Per-
mutation Matching (CPM) for RGB-IR G-ReID. We model
each group as a set of single-person features which are ex-
tracted by MPANet, then we propose the metric Closest Per-
mutation Distance (CPD) to measure the similarity between
two sets of features. CPD is invariant with order changes of
group members so that it solves the layout change prob-
lem in G-ReID. Furthermore, we introduce the problem
of G-ReID without person labels. In the weak-supervised
case, we design the Relation-aware Module (RAM) that ex-
ploits visual context and relations among group members
to produce a modality-invariant order of features in each
group, with which group member features within a set can
be sorted to form a robust group representation against
modality change. To support the study on RGB-IR G-ReID,
we construct a new large-scale RGB-IR G-ReID dataset
CM-Group. The dataset contains 15,440 RGB images and
15,506 infrared images of 427 groups and 1,013 identi-
ties. Extensive experiments on the new dataset demonstrate
the effectiveness of the proposed models and the complex-
ity of CM-Group. The code and dataset are available at:
https://github.com/WhollyOat/CM-Group .
| 1. Introduction
Group re-identification (G-ReID) is the problem of as-
sociating a group of people that appears in disjoint cam-
era views. The significant importance in video surveillance
Figure 1. The members within a group are independent in appearance and the visual similarity of two people does not indicate whether they are in a group. In Figure 1b, images in the same column are from the same person. Images with the same color are from the same group.
has yielded increasing attention and research efforts by the
community [28, 30, 36]. Compared to single-person re-
identification (ReID) which deals with a single person, gen-
erally G-ReID regards 2 to 6 people as a group and treats
two groups with at least 60 %the same individuals as the
same group. Hence, the main challenge of G-ReID is to
construct robust representations of groups with appearance
changes and group topology changes.
The existing works based on current G-ReID datasets
have made impressive strides, but there still remain sev-
eral important issues need to be resolved. First, available
datasets for G-ReID are limited in different aspects. For ex-
ample, the extant largest dataset for G-ReID City1M [36]
is synthesized and has huge domain gap between real im-
ages. The commonly used real datasets such as Road
Group [28], DukeMTMC Group [28] and CSG [30] are lim-
ited in amount of groups and images. Therefore, for real
applications, it is in need of a simulation of real scenarios
which includes large amount of people and variable scenes.
Another challenge we notice is that although G-ReID
tackles group matching problem, most deep-learning-based
works rely on labels of individuals to train deep nets for fea-
ture extraction [8,40]. The fully supervised training scheme
requires very large amounts of labour resources to make an-
notations, which is cost and time-consuming. This disad-
vantage makes it very hard to construct large-scale dataset
and impedes development of G-ReID.
To facilitate the study of G-ReID towards real-world
applications, we first introduce the RGB-infrared cross-
modality group re-identification (RGB-IR G-ReID) prob-
lem. Since infrared mode is widely used by surveillance
cameras at night, matching infrared images captured in dark
scenes with RGB images captured in bright scenes has been
an significant problem for ReID and G-ReID research. As
shown in Figure 1, RGB images have a huge domain gap
between infrared images. Meanwhile the appearances of
group members are independent of each other. Therefore
RGB-IR G-ReID not only handles modality discrepancy but
also faces challenges from group retrieval. When person
labels are available, we propose the Closest Permutation
Matching (CPM) framework. We adopt a state-of-the-art
RGB-IR ReID method MPANet to train a person feature ex-
tractor and model each group as a set of group member fea-
tures. To measure the similarity of two groups, we calculate
the Closest Permutation Distance (CPD) between two sets
of extracted features. CPD is a new metric that represents
the least distance of two sets of features under all permuta-
tions. In the weak-supervised case without person labels,
we do not know the identities of group members, which
makes it hard to train a person feature extractor. So we
propose a Relation-aware Module (RAM) to extract order
of group members which is invariant to modality changes.
RAM calculates visual relations between individuals within
a group to generate pseudo order and guide the network to
learn intrinsic orderings within groups.
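A minimal sketch of CPD, assuming equal group sizes: treat each group as a set of per-person features and take the smallest total distance over all orderings of the members. The Hungarian assignment gives the same minimum as enumerating permutations but scales better; the brute-force variant mirrors the definition for groups of 2 to 6 members. Function names here are ours for illustration.

```python
import numpy as np
from itertools import permutations
from scipy.optimize import linear_sum_assignment

def closest_permutation_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """Least total distance between two sets of member features over all
    permutations of the members (sketch; assumes equal group sizes).

    feats_a, feats_b: (K, D) L2-normalized person features of two groups.
    """
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=-1)  # (K, K)
    # Hungarian matching finds the minimizing permutation without enumerating K!.
    row, col = linear_sum_assignment(cost)
    return float(cost[row, col].sum())

def closest_permutation_distance_bruteforce(feats_a, feats_b):
    """Equivalent brute-force definition, feasible for groups of 2-6 members."""
    cost = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=-1)
    k = cost.shape[0]
    return float(min(cost[np.arange(k), list(perm)].sum()
                     for perm in permutations(range(k))))
```

Because the minimum is taken over member orderings, the score is unchanged when the layout of people inside a group changes.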
Furthermore, we have collected a new dataset called
Cross-Modality Group ReID (CM-Group) dataset. Com-
pared to existing G-ReID datasets, CM-Group has several
new features. 1) CM-Group contains 15,440 RGB im-
ages, 15,506 infrared images, 427 groups and 1,013 per-
sons, which is, to our best knowledge, the first RGB-IR
cross-modality G-ReID dataset and the largest real-world
G-ReID dataset. 2) The raw videos are captured by 6 cam-
eras at 6 different scenes over a time span of 6 months,
including large variations of illumination and viewpoint,
clothes changes and scale changes. 3) All images are orig-
inal frames of raw videos, i.e. all background information
is reserved, which enables researchers to mine useful infor-
mation in background. More details of CM-Group will be
discussed in Section 4.
The main contributions of this work include:
• We propose the Closest Permutation Matching (CPM)
to find the best match of group images with the
permutation-invariant metric Closest Permutation Dis-
tance (CPD). The CPM is resistant to group layout
changes and achieves excellent performances on CM-
Group.
• We introduce the problem of G-ReID without person
labels and propose the Relation-aware Module (RAM)
to leverage mutual relations of group members. Our
experiments show that RAM can extract a modality-
invariant order of members in a group regardless of
appearance and layout changes.
• We contribute a large-scale RGB-IR cross-modality G-
ReID dataset CM-Group, which supports more com-
prehensive study on G-ReID.
|
Wang_DeepVecFont-v2_Exploiting_Transformers_To_Synthesize_Vector_Fonts_With_Higher_Quality_CVPR_2023 | Abstract
Vector font synthesis is a challenging and ongoing prob-
lem in the fields of Computer Vision and Computer Graph-
ics. The recently-proposed DeepVecFont [27] achieved
state-of-the-art performance by exploiting information of
both the image and sequence modalities of vector fonts.
However, it has limited capability for handling long se-
quence data and heavily relies on an image-guided outline
refinement post-processing. Thus, vector glyphs synthesized
by DeepVecFont still often contain some distortions and ar-
tifacts and cannot rival human-designed results. To address
the above problems, this paper proposes an enhanced ver-
sion of DeepVecFont mainly by making the following three
novel technical contributions. First, we adopt Transform-
ers instead of RNNs to process sequential data and design a
relaxation representation for vector outlines, markedly im-
proving the model’s capability and stability of synthesizing
long and complex outlines. Second, we propose to sample
auxiliary points in addition to control points to precisely
align the generated and target B ´ezier curves or lines. Fi-
nally, to alleviate error accumulation in the sequential gen-
eration process, we develop a context-based self-refinement
module based on another Transformer-based decoder to re-
move artifacts in the initially synthesized glyphs. Both qual-
itative and quantitative results demonstrate that the pro-
posed method effectively resolves those intrinsic problems
of the original DeepVecFont and outperforms existing ap-
proaches in generating English and Chinese vector fonts
with complicated structures and diverse styles.
Figure 1. Visualization of the vector glyphs synthesized by DeepVecFont and Ours, where different colors denote different drawing commands. (a) DeepVecFont w/o refinement suffers from location shift. (b) DeepVecFont w/ refinement has both over-smoothness (see green circles) and under-smoothness (see blue circles). (c) Our method can directly synthesize visually-pleasing results with compact and coordinated outlines. Zoom in for better inspection.
| 1. Introduction
Vector fonts, in the format of Scalable Vector Graph-
ics (SVGs), are widely used in displaying documents, arts,
and media contents. However, designing high-quality vec-
tor fonts is time-consuming and costly, requiring extensive
experience and professional skills from designers. Auto-
matic font generation aims to simplify and facilitate the font
designing process: learning font styles from a small set of
user-provided glyphs and then generating the complete font
library. Until now, there still exist enormous challenges due
to the variety of topology structures, sequential lengths, and
styles, especially for some writing systems such as Chinese.
Recent years have witnessed significant progress [5, 17]
made by deep learning-based methods for vector font gen-
eration. Nevertheless, vector fonts synthesized by these
existing approaches often contain severe distortions and
are typically far from satisfactory. More recently, Wang
and Lian [27] proposed DeepVecFont that utilizes a dual-
modality learning architecture by exploiting the features
of both raster images and vector outlines to synthesize
visually-pleasing vector glyphs and achieve state-of-the-art
performance. However, DeepVecFont tends to bring lo-
cation shift to the raw vector outputs (Fig. 1(a)), which
are then further refined according to the synthesized im-
ages. Specifically, DeepVecFont adopts a differentiable ras-
terizer [14] to fine-tune the coordinates of the raw vector
glyphs by aligning their rasterized results and the synthe-
sized images. However, after the above process, the refined
vector outputs tend to be over-fitted to the inherent noise in
the synthesized images. Thus, there often exist suboptimal
outlines (Fig. 1(b)) with over-smoothed corners (green cir-
cles) or under-smoothed adjacent connections (blue circles)
in the final synthesized vector fonts, making them unsuited
to be directly used in real applications.
To address the above-mentioned issues, we propose a
new Transformer-based [25] encoder-decoder architecture,
named DeepVecFont-v2, to generate high-fidelity vector
glyphs with compact and coordinated outlines. Firstly,
we observed that the commonly used SVG representation,
which shares the same starting and ending points between
adjacent drawing commands, is more suitable for the learn-
ing process of RNNs [9] than Transformers [25]. RNNs
simulate a recurrent drawing process, where the next move-
ment is determined according to the current hidden state fed
with the current drawing command. Therefore, the start-
ing point of the following drawing command can be omit-
ted (replaced by the ending point of the current drawing
command). On the contrary, Transformers make drawing
prediction based on the self-attention operations performed
on any two drawing commands, whether adjacent or not.
Therefore, to make the attention operator receive the com-
plete information of their positions, the starting point of
each drawing command cannot be replaced by the ending
point of the previous command. Based on the above ob-
servation, we propose a relaxation representation that mod-
els these two points separately and merges them via an ex-
tra constraint. Secondly, although the control points of a
B´ezier curve contain all the primitives, we found that the
neural networks still need more sampling points from the
curve to perform a better data alignment. Therefore, we
sample auxiliary points distributed along the B ´ezier curves
when computing the proposed B ´ezier curve alignment loss.
Thirdly, to alleviate the error accumulation in the sequential
generation process, we design a self-refinement module that
utilizes the context information to further remove artifacts
in the initially synthesized results. Experiments conducted
on both English and Chinese font datasets demonstrate the
superiority of our method in generating complicated and di-
verse vector fonts and its capacity for synthesizing longer
sequences compared to existing approaches. To summarize, the major contributions of this paper are as follows:
- We develop a Transformer-based generative model,
accompanied by a relaxation representation of vector
outlines, to synthesize high-quality vector fonts with
compact and coordinated outlines.
- We propose to sample auxiliary points in addition
to control points to precisely align the generated
and target outlines, and design a context-based self-
refinement module to fully utilize the context informa-
tion to further remove artifacts.
- Extensive experiments have been conducted to verify
that state-of-the-art performance can be achieved by
our method in both English and Chinese vector font
generation.
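To make the auxiliary-point sampling described above concrete, the sketch below samples extra points along a cubic Bezier segment from its control points and compares them with an L1 loss. Uniform sampling in the curve parameter t and the number of samples are our assumptions, not the paper's exact choices.

```python
import torch

def sample_cubic_bezier(p0, p1, p2, p3, n_samples: int = 9):
    """Sample auxiliary points on a cubic Bezier curve.

    p0..p3: (..., 2) control points (start, two off-curve controls, end).
    Returns (..., n_samples, 2) points at uniformly spaced parameters t.
    """
    t = torch.linspace(0.0, 1.0, n_samples, device=p0.device)       # (S,)
    t = t.view(*([1] * (p0.dim() - 1)), -1, 1)                       # broadcastable (..., S, 1)
    p0, p1, p2, p3 = (p.unsqueeze(-2) for p in (p0, p1, p2, p3))     # (..., 1, 2)
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def bezier_alignment_loss(pred_ctrl, target_ctrl, n_samples: int = 9):
    """L1 distance between points sampled on predicted and target curves.
    pred_ctrl, target_ctrl: (..., 4, 2) control points of cubic segments."""
    pred_pts = sample_cubic_bezier(*pred_ctrl.unbind(dim=-2), n_samples)
    tgt_pts = sample_cubic_bezier(*target_ctrl.unbind(dim=-2), n_samples)
    return (pred_pts - tgt_pts).abs().mean()
```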
|
Xie_Adversarially_Robust_Neural_Architecture_Search_for_Graph_Neural_Networks_CVPR_2023 | Abstract
Graph Neural Networks (GNNs) obtain tremendous suc-
cess in modeling relational data. Still, they are prone to
adversarial attacks, which are massive threats to applying
GNNs to risk-sensitive domains. Existing defensive meth-
ods neither guarantee performance facing new data/tasks
or adversarial attacks nor provide insights to understand
GNN robustness from an architectural perspective. Neural
Architecture Search (NAS) has the potential to solve this
problem by automating GNN architecture designs. Never-
theless, current graph NAS approaches lack robust design
and are vulnerable to adversarial attacks. To tackle these
challenges, we propose a novel Robust Neural Architecture
search framework for GNNs (G-RNA). Specifically, we de-
sign a robust search space for the message-passing mech-
anism by adding graph structure mask operations into the
search space, which comprises various defensive operation
candidates and allows us to search for defensive GNNs. Fur-
thermore, we define a robustness metric to guide the search
procedure, which helps to filter robust architectures. In this
way, G-RNA helps understand GNN robustness from an ar-
chitectural perspective and effectively searches for optimal
adversarial robust GNNs. Extensive experimental results on
benchmark datasets show that G-RNA significantly outper-
forms manually designed robust GNNs and vanilla graph
NAS baselines by 12.1% to 23.4% under adversarial attacks.
| 1. Introduction
Graph Neural Networks are well-known for modeling
relational data and are applied to various downstream real-
world applications like recommender systems [37], knowl-
edge graph completion [25], traffic forecasting [6], drug
production [45], etc. Meanwhile, like many other deep neu-
ral networks, GNNs are notorious for their vulnerability
under adversarial attacks [33], especially in risk-sensitive
domains, such as finance and healthcare. Since GNNs model
node representations by aggregating the neighborhood in-
formation, an attacker could perform attacks by perturbing
node features and manipulating relations among nodes [55].
For example, in the user-user interaction graph, a fraudster
may deliberately interact with other important/fake users
to mislead the recommender system or fool credit scoring
models [36].
A series of defense methods on graph data have
been developed to reduce the harm of adversarial attacks.
Preprocessing-based approaches like GCN-SVD [8] and
GCN-Jaccard [38] conduct structure cleaning before training
GNNs, while attention-based models like RGCN [51] and
GNN-Guard [46] learn to focus less on potential perturbed
edges. However, these methods rely on prior knowledge of
the attacker. For example, GCN-SVD leverages the high-
rank tendency of graph structure after Nettack [53], and
GCN-Jaccard depends on the homophily assumption on the
graph structure. As a result, current approaches may fail to
adapt to scenarios when encountering new data and tasks
or when new attack methods are proposed. Additionally,
previous methods largely neglect the role of GNN architec-
tures in defending against adversarial attacks, lacking an
architectural perspective in understanding GNN robustness.
In order to reduce human efforts in neural architecture
designs, Neural Architecture Search (NAS) has become in-
creasingly popular in both the research field and industry.
Though NAS has the potential of automating robust GNN
designs, existing graph NAS methods [1, 10, 24, 49, 50] are
inevitably susceptible to adversarial attacks since they do
not consider adversarial settings and lack robustness de-
signs [48]. Therefore, how to adopt graph NAS to search
for optimal robust GNNs in various environments, and in
the meantime, fill the gap of understanding GNN robustness
from an architectural perspective, remains a huge challenge.
To address the aforementioned problems and to under-
stand GNN robustness from an architectural perspective, we
propose a novel Robust Neural Architecture search frame-
work for Graph neural networks (G-RNA), which is the first
attempt to exploit powerful graph neural architecture search
in robust GNNs, to our best knowledge. Specifically, we
first design a novel, expressive, and robust search space with
Figure 1. The overall framework of G-RNA. Given a clean graph, the supernet built upon our robust search space is trained in a single-path one-shot way. Then, the attack proxy produces several adversarial samples based on the clean graph and we search for robust GNNs with the proposed robustness metric. Finally, we evaluate the optimal robust GNN on graphs perturbed by the attacker.
graph structure mask operations. The green part in Fig. 1
shows the fine-grained search space. The graph structure
mask operations cover important robust essences of graph
structure and could recover various existing defense meth-
ods as well. We train the supernet built upon our designed
search space in a single-path one-shot way [14]. Second, we
propose a robustness metric that could properly measure the
architecture’s robustness. Based on the clean graph, an at-
tack proxy produces several adversarial samples. We search
robust GNNs using our robustness metric with clean and
generated adversarial samples. A simple illustration of the
robustness metric is shown in the yellow part in Fig. 1. Af-
ter searching for the optimal robust GNN architecture with
the evolutionary algorithm, we retrain the top-selected ro-
bust architectures from scratch and perform evaluations. Our
contributions are summarized as follows:
•We develop a robust neural architecture search framework
for GNNs, which considers robust designs in graph neural
architecture search for the first time to the best of our
knowledge. Based on this framework, we can understand
adversarial robustness for GNNs from an architectural
perspective.
•We propose a novel robust search space by designing de-
fensive operation candidates to automatically select the
most appropriate defensive strategies when confronting
perturbed graphs. Besides, we design a robustness met-
ric and adopt an evolutionary algorithm together with a
single-path one-shot graph NAS framework to search for
the optimal robust architectures.
•Extensive experimental results demonstrate the efficacy of
our proposed method. G-RNA outperforms state-of-the-artrobust GNNs by 12.1% to 23.4% on benchmark datasets
under heavy poisoning attacks.
|
Xie_RA-CLIP_Retrieval_Augmented_Contrastive_Language-Image_Pre-Training_CVPR_2023 | Abstract
Contrastive Language-Image Pre-training (CLIP) is at-
tracting increasing attention for its impressive zero-shot
recognition performance on different down-stream tasks.
However, training CLIP is data-hungry and requires lots
of image-text pairs to memorize various semantic concepts.
In this paper, we propose a novel and efficient frame-
work: Retrieval Augmented Contrastive Language-Image
Pre-training (RA-CLIP) to augment embeddings by online
retrieval. Specifically, we sample part of image-text data as
a hold-out reference set. Given an input image, relevant
image-text pairs are retrieved from the reference set to
enrich the representation of input image. This process can
be considered as an open-book exam: with the reference
set as a cheat sheet, the proposed method doesn’t need
to memorize all visual concepts in the training data. It
explores how to recognize visual concepts by exploiting
correspondence between images and texts in the cheat
sheet. The proposed RA-CLIP implements this idea and
comprehensive experiments are conducted to show how
RA-CLIP works. Performances on 10 image classification
datasets and 2 object detection datasets show that RA-
CLIP outperforms vanilla CLIP baseline by a large margin
on zero-shot image classification task (+12.7%), linear
probe image classification task (+6.9%) and zero-shot ROI
classification task (+2.8%).
| 1. Introduction
Traditional visual representation learning systems are
trained to predict a fixed set of predetermined image cate-
gories [12, 16, 22, 34]. This limits their transferability since
additional labeled training data are required to recognize
new visual concepts. Recently, vision-language pre-training
approaches such as CLIP [29] emerge as a promising
alternative which introduces text description as supervision.
CLIP aligns image modality and text modality by learning a
Figure 1. Transferring the CLIP and RA-CLIP to 12 down-stream visual recognition datasets (ImageNet, ImageNetV2, CIFAR10, CIFAR100, Caltech 101, Oxford Pets, SUN397, DTD, Stanford Dogs, Food 101, MS COCO, and LVIS) for zero-shot evaluation. Our RA-CLIP achieves better results in 10 out of 12 datasets, and brings +12.7% averaged improvements on the 10 image classification datasets and 2.8% averaged improvements on the 2 object detection datasets.
modality-shared representation. During pre-training, CLIP
learns to pull matched image-text pairs together and push
non-matched pairs apart. After pre-training, CLIP can be
transferred to zero-shot image classification task: categories
can be referred by textual descriptions, and the image clas-
sification task can be converted to image-to-text retrieval
task. Experimental results show that CLIP performs well
on zero-shot image classification task, e.g., for ImageNet
zero-shot classification task, CLIP can match the accuracy
of ImageNet pre-trained ResNet50, even though CLIP doesn't
use any of the 1.28 million training examples of ImageNet
for training.
Despite the impressive zero-shot performance, CLIP
requires lots of image-text pairs to train encoders and
memorize various semantic concepts, which limits its ap-
plications since it is not affordable for most laboratories
and companies. Recent works [24, 25] try to alleviate this
limitation by taking full advantage of existing data and
train encoders to memorize concepts as many as possible,
e.g., DeCLIP [24] explores widespread supervision from
given image-text pairs, and SLIP [25] introduces self-
supervised learning which helps encoders learn better visual
representation.
In this paper, we propose a novel and efficient way to
make use of image-text pairs. We sample part of image-
text data as a hold-out reference set . Given an input image,
our model first retrieves similar images from the reference
set with an unsupervised pre-trained image retrieval model,
then we use the relationship between retrieved images and
texts to augment the representation of input image. A
heuristic explanation of the idea is that, it can be considered
as an open-book exam: our model doesn’t have to memorize
all visual concepts in the training data, but learns to
recognize visual concepts with the help of a cheat sheet (i.e.,
the reference set). We propose a framework called Retrieval
Augmented Contrastive Language-Image Pre-training (RA-
CLIP) to implement this idea. Although using the same
amount of image-text data with the vanilla CLIP, RA-CLIP
achieves better zero-shot classification performance and
linear probe classification performance.
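A minimal sketch of the retrieval-augmentation step, assuming the reference set is stored as two aligned matrices of image and text embeddings and that the fusion is a similarity-weighted sum; the actual fusion module in RA-CLIP may be more elaborate, and the function name and hyperparameters here are ours.

```python
import torch
import torch.nn.functional as F

def retrieval_augment(img_emb, ref_img_emb, ref_txt_emb, k: int = 16, tau: float = 0.07):
    """Enrich an image embedding with its k nearest reference image-text pairs.

    img_emb:     (B, D)  query image embeddings.
    ref_img_emb: (N, D)  reference-set image embeddings (hold-out pairs).
    ref_txt_emb: (N, D)  corresponding reference-set text embeddings.
    """
    q = F.normalize(img_emb, dim=-1)
    r = F.normalize(ref_img_emb, dim=-1)
    sim = q @ r.t()                                   # (B, N) cosine similarity
    topv, topi = sim.topk(k, dim=-1)                  # retrieve k nearest pairs
    w = F.softmax(topv / tau, dim=-1)                 # (B, k) retrieval weights
    # Aggregate the retrieved pairs' image and text embeddings.
    agg_img = (w.unsqueeze(-1) * ref_img_emb[topi]).sum(dim=1)
    agg_txt = (w.unsqueeze(-1) * ref_txt_emb[topi]).sum(dim=1)
    # Fuse with the original embedding (simple additive fusion for this sketch).
    return F.normalize(img_emb + agg_img + agg_txt, dim=-1)
```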
Our contributions are three-fold:
For contrastive language-image pre-training (CLIP),
we present a novel and efficient utilization of image-
text pairs. Concretely, we construct a hold-out ref-
erence set composed by image-text pairs. Given an
input image, we find relevant image-text pairs and use
them to help us build better representation for the input
image.
We propose Retrieval Augmented Contrastive
Language-Image Pre-training (RA-CLIP), a
framework to implement the idea described above.
We conduct comprehensive experiments to validate
the effectiveness of each block. Visualization results
are also provided to explain how RA-CLIP works.
We compare the proposed RA-CLIP with previous
methods on a dozen of commonly used visual recog-
nition benchmarks. Experimental results show that
our proposed method significantly outperforms vanilla
CLIP baseline and other recently proposed methods.
|
Xie_Unpaired_Image-to-Image_Translation_With_Shortest_Path_Regularization_CVPR_2023 | Abstract
Unpaired image-to-image translation aims to learn
proper mappings that can map images from one domain to
another domain while preserving the content of the input
image. However, with large enough capacities, the network
can learn to map the inputs to any random permutation of
images in another domain. Existing methods treat two do-
mains as discrete and propose different assumptions to ad-
dress this problem. In this paper, we start from a different
perspective and consider the paths connecting the two do-
mains. We assume that the optimal path length between
the input and output image should be the shortest among
all possible paths. Based on this assumption, we propose a
new method to allow generating images along the path and
present a simple way to encourage the network to find the
shortest path without pair information. Extensive experi-
ments on various tasks demonstrate the superiority of our
approach. The code is available at https://github.com/Mid-Push/santa.
| 1. Introduction
Many important problems in computer vision can be
viewed as image-to-image translation problems, including
domain adaptation [22, 46], super-resolution [7, 66] and
medical image analysis [2]. Let X and Y represent two
domains, respectively. In unpaired image-to-image transla-
tion, we are given two collections of images from the two
domains with distributions {PX, PY} without pair informa-
tion. Our goal is to find the true conditional (joint) distribu-
tion of two domains PY|X(PX,Y); with the true conditional
(joint) distribution, we are able to translate the input images
in one domain such that the outputs look like images in an-
other domain while the semantic information of the input
images is preserved. For example, given old human faces in
one domain and young human faces in another domain, we
Figure 1. Illustration of our shortest path assumption. Almost all existing methods only use two discrete domains {X, Y}. Instead, we consider the paths connecting two domains and assume that the optimal mapping generates the shortest path, e.g., x1 → y1 rather than x1 → y2.
want to learn the true joint distribution across the two do-
mains. Then we can translate the old face into young face
image while preserving the important information (e.g., the
identity) of the input face (see Fig. 1).
However, there can exist an infinite number of joint dis-
tributions corresponding to the given two marginal distri-
butions [39]. It means that the problem is highly ill-posed
and we may not derive meaningful results without any ad-
ditional assumptions. As one of the most popular assump-
tions, cycle consistency [71] assumes that the optimal map-
ping should be one-to-one and has been achieving impres-
sive performance in many tasks. However, the one-to-one
mapping assumption can be restrictive sometimes [48], es-
pecially when images from one domain have additional in-
formation compared to the other domain. As an alterna-
tive, contrastive learning based method [48] has become
popular recently. It assumes that the mutual information
of patches in the same location of the input and translated
image should be maximized. Then it employs the infoNCE
loss [57] to associate corresponding patches and disassoci-
ate them from others. However, it has been shown that the
choices of samples in the contrastive learning can have large
impact on the results and most of recent image translation
methods are trying to improve it, e.g., negative sample min-
ing [30, 58, 68] and positive sample mining [24].
Departing from existing unpaired image translation
methods, we consider the paths connecting images in the
first domain to images in the second domain and propose a
shortest path assumption to address the ill-posed joint dis-
tribution learning problem. Specifically, we assume that the
path, connecting one image in the first domain to its cor-
responding paired image in another domain, should be the
shortest one compared to other paths (see Fig. 1). However,
as we are only given images from two discrete domains,
we have no access to real images on the paths. To address
this problem, we first make a shared generating assump-
tion to allow synthesizing images along the path: images
are generated with the same function from the shared latent
code space and the domain variable (as a surrogate of the
domain-specific information). Then by changing the value
of the domain variable, we obtain a path connecting the in-
put and output image. In order to find a proper mapping, we
need to minimize the path length, which can be formulated
as expectation of the norm of the Jacobian matrix with re-
spect to the domain variable. To reduce the cost of comput-
ing the high-dimensional Jacobian matrix computation, we
further propose to use finite difference method to approx-
imate the Jacobian matrix and penalize the squared norm.
The contributions of this paper lie in the following aspects:
1. We propose a shortest path assumption to address the
ill-posed unpaired image translation problem. We also
conduct experiments to justify our assumption.
2. We propose a simple and efficient way to allow syn-
thesizing images on the path connecting two domains
and penalize the path length to find a proper mapping.
Our method is the fastest one among all unpaired im-
age translation methods.
3. Extensive experiments are conducted to demonstrate
the superiority of our proposed method.
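The finite-difference shortest-path regularizer described above can be sketched as follows: perturb the domain variable by a small step and penalize the squared norm of the induced change in the output, which approximates the expected squared Jacobian norm along the path. The generator interface, tensor shapes, and step size are assumptions for this sketch.

```python
import torch

def shortest_path_penalty(generator, z, d, delta: float = 1e-2):
    """Finite-difference approximation of E||dG(z, d)/dd||^2.

    generator: maps (latent code z, domain variable d) -> image of shape (B, C, H, W).
    z: (B, ...) shared latent codes inferred from the input images.
    d: (B, 1)   scalar domain variable in [0, 1] (0 = source, 1 = target).
    """
    x = generator(z, d)
    x_step = generator(z, d + delta)
    # Squared path-length term; dividing by delta^2 rescales the difference
    # quotient so the penalty approximates the squared Jacobian norm.
    return ((x_step - x) ** 2).sum(dim=[1, 2, 3]).mean() / (delta ** 2)
```

Adding this term to the usual adversarial objective biases training toward mappings whose input-to-output paths are short.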
|
Wang_Bi-LRFusion_Bi-Directional_LiDAR-Radar_Fusion_for_3D_Dynamic_Object_Detection_CVPR_2023 | Abstract
LiDAR and Radar are two complementary sensing ap-
proaches in that LiDAR specializes in capturing an object’s
3D shape while Radar provides longer detection ranges as
well as velocity hints. Though seemingly natural, how to
efficiently combine them for improved feature representa-
tion is still unclear. The main challenge arises from that
Radar data are extremely sparse and lack height informa-
tion. Therefore, directly integrating Radar features into
LiDAR-centric detection networks is not optimal. In this
work, we introduce a bi-directional LiDAR-Radar fusion
framework, termed Bi-LRFusion, to tackle the challenges
and improve 3D detection for dynamic objects. Technically,
Bi-LRFusion involves two steps: first, it enriches Radar’s
local features by learning important details from the LiDAR
branch to alleviate the problems caused by the absence of
height information and extreme sparsity; second, it com-
bines LiDAR features with the enhanced Radar features in
a unified bird’s-eye-view representation. We conduct ex-
tensive experiments on nuScenes and ORR datasets, and
show that our Bi-LRFusion achieves state-of-the-art perfor-
mance for detecting dynamic objects. Notably, Radar data
in these two datasets have different formats, which demon-
strates the generalizability of our method. Codes are avail-
able at https://github.com/JessieW0806/Bi-LRFusion.
| 1. Introduction
LiDAR has been considered as the primary sensor in the
perception subsystem of most autonomous vehicles (AVs)
due to its capability of providing accurate position mea-
surements [9, 16, 32]. However, in addition to object po-
sitions, AVs are also in an urgent need for estimating the
motion state information ( e.g., velocity), especially for dy-
namic objects. Such information cannot be measured by
Figure 1. An illustration of (a) uni-directional LiDAR-Radar fusion mechanism, (b) our proposed bi-directional LiDAR-Radar fusion mechanism, and (c) the average precision gain (%) of uni-directional fusion method RadarNet* against the LiDAR-centric baseline CenterPoint [40] over categories with different average height (m). We use * to indicate it is re-produced by us on the CenterPoint. The improvement by involving Radar data is not consistent for objects with different height, i.e., taller objects like truck, bus and trailer do not enjoy as much performance gain. Note that all height values are transformed to the LiDAR coordinate system.
LiDAR sensors since they are insensitive to motion. As a
result, millimeter-wave Radar (referred to as Radar in this
paper) sensors are engaged because they are able to infer the
object’s relative radial velocity [21] based upon the Doppler
effect [28]. Besides, on-vehicle Radar usually offers longer
detection range than LiDAR [36], which is particularly use-
ful on highways and expressways. In the exploration of
combining LiDAR and Radar data for ameliorating 3D dy-
namic object detection, the existing approaches [22, 25, 36]
follow the common mechanism of uni-directional fusion,
as shown in Figure 1 (a). Specifically, these approaches di-
rectly utilize the Radar data/feature to enhance the LiDAR-
centric detection network without first improving the qual-
ity of the feature representation of the former.
However, independently extracted Radar features are not
enough for refining LiDAR features, since Radar data are
extremely sparse and lack the height information1. Specifi-
cally, taking the data from the nuScenes dataset [4] as an ex-
ample, the 32-beam LiDAR sensor produces approximately
30,000 points, while the Radar sensor only captures about
200 points for the same scene. The resulting Radar bird’s
eye view (BEV) feature hardly attains valid local infor-
mation after being processed by local operators ( e.g., the
neighbors are most likely empty when a non-empty Radar
BEV pixel is convolved by convolutional kernels). Be-
sides, on-vehicle Radar antennas are commonly arranged
horizontally, hence missing the height information in the
vertical direction. In previous works, the height values of
the Radar points are simply set as the ego Radar sensor’s
height. Therefore, when features from Radar are used for
enhancing the feature of LiDAR, the problematic height in-
formation of Radar leads to unstable improvements for ob-
jects with different heights. For example, Figure 1 (c) illus-
trates this problem. The representative method RadarNet
falls short in the detection performance for tall objects –
the truck class even experiences 0.5% AP degradation after
fusing the Radar data.
In order to better harvest the benefit of LiDAR and Radar
fusion, our viewpoint is that Radar features need to be more
powerful before being fused. Therefore, we first enrich the
Radar features – with the help of LiDAR data – and then
integrate the enriched Radar features into the LiDAR pro-
cessing branch for more effective fusion. As depicted in
Figure 1 (b), we refer to this scheme as bi-directional fu-
sion. And in this work, we introduce a framework, Bi-
LRFusion , to achieve this goal. Specifically, Bi-LRFusion
first encodes BEV features for each modality individually.
Next, it engages the query-based LiDAR-to-Radar (L2R)
height feature fusion and query-based L2R BEV feature fu-
sion, in which we query and group LiDAR points and Li-
DAR BEV features that are close to the location of each
non-empty grid cell on the Radar feature map, respectively.
The grouped LiDAR raw points are aggregated to formu-
late pseudo-Radar height features, and the grouped LiDAR
BEV features are aggregated to produce pseudo-Radar BEV
features. The generated pseudo-Radar height and BEV fea-
tures are fused to the Radar BEV features through concate-
nation. After enriching the Radar features, Bi-LRFusion
then performs the Radar-to-LiDAR (R2L) fusion in a uni-
fied BEV representation. Finally, a BEV detection network
consisting of a BEV backbone network and a detection head is applied to output 3D object detection results.
¹This shortcoming is due to today's Radar technology, which may likely change as the technology advances very rapidly, e.g., new-generation 4D Radar sensors [3].
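To make the query-based L2R BEV feature fusion step easier to picture, the sketch below approximates it in PyTorch: for every non-empty Radar BEV cell it pools the surrounding LiDAR BEV features into a pseudo-Radar feature and concatenates it onto the Radar branch. This is only an illustrative simplification (average pooling stands in for the query-and-group aggregation), and the function name, tensor shapes, and neighbourhood size are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def l2r_bev_feature_fusion(radar_bev, lidar_bev, k=3):
    """Enrich Radar BEV features with neighbouring LiDAR BEV features.

    radar_bev: (B, Cr, H, W) sparse Radar BEV features (mostly zeros).
    lidar_bev: (B, Cl, H, W) dense LiDAR BEV features on the same grid.
    Returns (B, Cr + Cl, H, W): Radar features concatenated with
    pseudo-Radar BEV features aggregated from the LiDAR branch.
    """
    # Mask of non-empty Radar cells (any channel activated).
    occupied = (radar_bev.abs().sum(dim=1, keepdim=True) > 0).float()

    # Average-pool LiDAR features over a k x k neighbourhood around each cell;
    # this stands in for the query-and-group step of the paper.
    pseudo_radar = F.avg_pool2d(lidar_bev, kernel_size=k, stride=1, padding=k // 2)

    # Keep the pseudo features only where the Radar grid is non-empty,
    # then fuse by concatenation along the channel dimension.
    pseudo_radar = pseudo_radar * occupied
    return torch.cat([radar_bev, pseudo_radar], dim=1)

# Example usage with made-up sizes.
radar = torch.zeros(1, 8, 128, 128)
radar[:, :, 40, 60] = 1.0          # a single non-empty Radar cell
lidar = torch.randn(1, 64, 128, 128)
fused = l2r_bev_feature_fusion(radar, lidar)
print(fused.shape)                  # torch.Size([1, 72, 128, 128])
```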
We validate the merits of bi-directional LiDAR-Radar
fusion via evaluating our Bi-LRFusion on nuScenes and
Oxford Radar RobotCar (ORR) [1] datasets. On nuScenes
dataset, Bi-LRFusion improves the mAP (↑) by 2.7% and reduces the mAVE (↓) by 5.3% against the LiDAR-centric baseline CenterPoint [40], and remarkably outperforms the strongest counterpart, i.e., RadarNet, in terms of AP by an absolute 2.0% for cars and 6.3% for motorcycles. Moreover, Bi-LRFusion generalizes well to the ORR dataset, which has a different Radar data format, achieving a 1.3% AP improvement for vehicle detection.
In summary, we make the following contributions:
• We propose a bi-directional fusion framework, namely
Bi-LRFusion, to combine LiDAR and Radar features
for improving 3D dynamic object detection.
• We devise the query-based L2R height feature fusion
and query-based L2R BEV feature fusion to enrich
Radar features with the help of LiDAR data.
• We conduct extensive experiments to validate the mer-
its of our method and show considerably improved re-
sults on two different datasets.
|
Wang_Dionysus_Recovering_Scene_Structures_by_Dividing_Into_Semantic_Pieces_CVPR_2023 | Abstract
Most existing 3D reconstruction methods result in either
detail loss or unsatisfying efficiency. However, effective-
ness and efficiency are equally crucial in real-world ap-
plications, e.g., autonomous driving and augmented reality.
We argue that this dilemma comes from wasted resources
on valueless depth samples. This paper tackles the prob-
lem by proposing a novel learning-based 3D reconstruc-
tion framework named Dionysus. Our main contribution
is to find out the most promising depth candidates from es-
timated semantic maps. This strategy simultaneously en-
ables high effectiveness and efficiency by attending to the
most reliable nominators. Specifically, we distinguish unre-
liable depth candidates by checking the cross-view semantic
consistency and allow adaptive sampling by redistributing
depth nominators among pixels. Experiments on the most
popular datasets confirm our proposed framework’s effec-
tiveness.
| 1. Introduction
Recovering 3D structures from 2D images is one of
the most fundamental computer vision tasks [7, 25, 26,
68] and has wide applications in various scenarios (e.g.,
autonomous driving, metaverse, and augmented reality).
Thanks to the popularity of smartphones, high-quality RGB
videos are always obtainable. Since each image describes
only a tiny piece of the whole scenario [31,33], reconstruc-
tion from multiple frames [44, 50] is more attractive than
from a single image [13, 14]. Although model quality and
real-time response are equally essential, the existing 3D
reconstruction methods have difficulty performing well in
both aspects. For example, multi-view stereo (MVS) mod-
els [63,73] consume seconds on each image, while real-time
approaches [9, 57, 64] lead to missing details or large-area mess.
The mainstream reconstruction methods [15,63,72] gen-
erally sample a set of candidate depths or voxels, then use
neural networks (e.g., CNNs [21] and transformers [58]) to
evaluate each candidate’s reliability. The candidate num-
ber is generally small because of the evaluation networks’
high computational demand. Consequently, the reconstruc-
tion quality becomes unsatisfying because the ground truth
is less likely to be sampled.
Pursuing a higher candidate sampling density, CasMVS-
Net [15] shrinks the depth range in a coarse-to-fine fashion;
IS-MVSNet [63] searches for new candidates according to
the estimated error distribution; NeuralRecon [57] prunes
the voxels thought to be unreliable in the previous predic-
tions. All these methods select candidates according to the
initial depth estimations. Notably, small objects and del-
icate structures are hard to recover in the beginning low-
resolution phase because they require a high resolution to
distinguish. As a result, the final reconstruction often suf-
fers from missing objects and crude details because the ac-
tual depth can be outside the final search range when the
initial estimation is unreliable. However, decent reconstruc-
tion quality on delicate objects is crucial to many real-world
applications. Specifically, traffic accidents may occur if
any pedestrian or warning post fails to recover; frequent
collisions and even fire disasters might happen if cleaning
robots cannot well reconstruct table legs and electric ca-
bles. Besides the defects in meticulous structures, coarse-
to-fine models also have room to improve efficiency. As
mentioned, the mainstream coarse-to-fine methods sample
and then evaluate initial candidates multiple times to locate
the most valuable candidate depths. Notably, examining the
preliminary candidates may be more expensive (usually two
times more [15, 78]) than assessing the final nominators.
In addition to the costly evaluation networks widely
adopted in coarse-to-fine architectures, another natural so-
Figure 1. The overall architecture of our framework. We first estimate a semantic map for the current frame and then retrieve the former
semantic estimations. After that, we locate the most valuable depth candidates through the semantic maps. Finally, we predict the depth
map for the current frame by examining each depth candidate.
lution is to measure each depth candidate’s reliability ac-
cording to the photometric consistency among RGB images
or feature maps from different views. The basic logic be-
hind these methods is that pixels from distinct perspectives
should own the same color if they are the correct projec-
tions from the same 3D point. However, a 3D point may
look distinct in different views due to illumination, trans-
parency, and reflection. Moreover, candidates close to but
not precisely coinciding with the ground truth may have dif-
ferent colors for delicate objects, thus leading to low photo-
metric consistency. Consequently, the found candidate may
correspond to a pixel distant from the ground truth but of a
similar color.
To get rid of detail defects and keep efficiency, it be-
comes crucial to accurately and efficiently find the most
promising candidate depths. This paper proposes a novel
high-quality real-time 3D reconstruction framework named
Dionysus, which looks for depth hypotheses based on se-
mantic estimations. Precisely, we distinguish each depth
candidate’s reliability according to its semantic consistency
among warped views. Fig. 1 illustrates the overall archi-
tecture of our framework. Our intuition is that two pixels
should share the same semantic label if they are projections
of the same 3D point. We argue that selecting depth candi-
dates according to the semantic tags results in various ben-
efits:
1.Consistent: A 3D point may have distinct colors when observed from different perspectives, but its semantic label never changes (a minimal sketch of this check follows the list).
2.Efficient: Semantic estimation requires a meager cost.
A semantic segmentation model may spend only mil-
liseconds on each frame [11,47] while evaluating each
cost volume in coarse layers generally takes ten times
more cost. In addition, most hierarchical methods
shrink the depth range only by half in each stage, while
our method may significantly reduce the depth range
on delicate objects (e.g., desk legs).
3.Dense: Tiny objects always get missing in hierarchical
frameworks because the initial samples are too sparse.
However, the semantic maps are highly dense, thus re-
taining fine details.
4.Adaptive: Semantics indicate various properties of
the pixel. For example, an electric cable may demand
highly dense samples to recover, but a wall may not. In
other words, we can assign each pixel the most suitable
sampling density according to its semantic estimation.
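The sketch below illustrates the cross-view semantic consistency test behind benefit 1: a pixel's candidate depths are kept only if the pixel, warped into another view at that depth, lands on the same semantic label. The pinhole camera model, tensor shapes, and function name are simplified assumptions for illustration, not the paper's implementation (which operates on full semantic maps over multiple retrieved views).

```python
import torch

def semantic_consistency_mask(uv, depths, label_ref, sem_src, K, R, t):
    """Keep depth candidates whose reprojection hits the same semantic label.

    uv:        (N, 2) float pixel coordinates in the reference view.
    depths:    (N, D) candidate depths per pixel.
    label_ref: (N,)   semantic label of each reference pixel.
    sem_src:   (H, W) semantic map of the source view.
    K:         (3, 3) shared intrinsics; R, t: reference-to-source rotation/translation.
    Returns a (N, D) boolean mask of semantically consistent candidates.
    """
    N, D = depths.shape
    ones = torch.ones(N, 1)
    rays = (torch.inverse(K) @ torch.cat([uv, ones], dim=1).T).T        # (N, 3) unit-depth rays
    pts = rays[:, None, :] * depths[..., None]                          # (N, D, 3) in reference camera
    pts = pts @ R.T + t                                                 # (N, D, 3) in source camera
    proj = pts @ K.T                                                    # (N, D, 3) homogeneous pixels
    u = (proj[..., 0] / proj[..., 2].clamp(min=1e-6)).round().long()
    v = (proj[..., 1] / proj[..., 2].clamp(min=1e-6)).round().long()

    H, W = sem_src.shape
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    u, v = u.clamp(0, W - 1), v.clamp(0, H - 1)
    same_label = sem_src[v, u] == label_ref[:, None]
    return inside & same_label
```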
Depth Reassignment: A bed pixel may have many more
valid depth candidates than a pen pixel after the semantic-
based pruning because there are likely many bed pixels in
other views. Consequently, we cannot form a regular cost
volume based on the pruned depth candidates because pix-
els may have different numbers of valid depth candidates.
However, most GPUs are not designed for irregular inputs
and are inefficient in such cases. Thus, we further propose
reallocating depth samples from pixels with excess valid
candidates to those in the opposite. In this way, all pix-
els finally have the same number of candidates and thus can
be conveniently computed by GPUs. Moreover, we sam-
ple more densely for delicate structures, otherwise more
sparsely, because tiny objects have a narrower depth range
after semantic pruning, and all pixels have the same depth
number.
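The depth-reassignment step can be illustrated with the small sketch below: after semantic pruning, every pixel receives the same number of hypotheses by resampling uniformly inside its own semantically consistent depth range, so delicate structures with a narrow surviving range automatically get denser sampling. The function and its fallback behaviour are assumptions for illustration, not the paper's exact procedure.

```python
import torch

def reassign_depth_candidates(candidates, valid, num_out):
    """Redistribute depth samples so every pixel ends up with num_out candidates.

    candidates: (N, D) initial float depth hypotheses per pixel.
    valid:      (N, D) boolean mask from the cross-view semantic check
                (True where the warped pixel keeps the same semantic label).
    Returns (N, num_out) depths resampled uniformly inside each pixel's
    semantically consistent depth range.
    """
    large = torch.finfo(candidates.dtype).max
    d_min = torch.where(valid, candidates, torch.full_like(candidates, large)).min(dim=1).values
    d_max = torch.where(valid, candidates, torch.full_like(candidates, -large)).max(dim=1).values

    # Pixels with no valid candidate fall back to the full original range.
    none_valid = ~valid.any(dim=1)
    d_min = torch.where(none_valid, candidates.min(dim=1).values, d_min)
    d_max = torch.where(none_valid, candidates.max(dim=1).values, d_max)

    # Uniformly resample num_out depths inside the (possibly much narrower) range,
    # so delicate objects automatically get denser sampling.
    steps = torch.linspace(0.0, 1.0, num_out, device=candidates.device)
    return d_min[:, None] + (d_max - d_min)[:, None] * steps[None, :]

# Example: 2 pixels, 8 initial candidates each, 4 output candidates.
cands = torch.linspace(1.0, 8.0, 8).expand(2, 8)
valid = torch.zeros(2, 8, dtype=torch.bool)
valid[0, 2:4] = True       # a thin object: only a narrow slice survives pruning
valid[1, :] = True         # a wall: the whole range is consistent
print(reassign_depth_candidates(cands, valid, 4))
```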
To summarize, this paper has two significant contribu-
tions:
1. Instead of densely evaluating all depths or sparsely
evaluating limited depths, we efficiently select the
most promising depth candidates based on the cross-
view semantic consistency.
2. We reallocate depth samples among pixels to make our
model adaptive to objects and efficient on GPUs.
The mentioned contributions significantly benefit the ef-
fectiveness while retaining efficiency. Our extensive exper-
iments on the most popular 3D reconstruction datasets [5]
further verify our proposed method’s validity.
|
Wang_Consistent-Teacher_Towards_Reducing_Inconsistent_Pseudo-Targets_in_Semi-Supervised_Object_Detection_CVPR_2023 | Abstract
In this study, we dive deep into the inconsistency of
pseudo targets in semi-supervised object detection (SSOD).
Our core observation is that the oscillating pseudo-targets
undermine the training of an accurate detector. It injects
noise into the student’s training, leading to severe overfit-
ting problems. Therefore, we propose a systematic solu-
tion, termed Consistent-Teacher , to reduce the in-
consistency. First, adaptive anchor assignment (ASA) sub-
stitutes the static IoU-based strategy, which enables the
student network to be resistant to noisy pseudo-bounding
boxes. Then we calibrate the subtask predictions by de-
signing a 3D feature alignment module (FAM-3D). It allows
each classification feature to adaptively query the optimal
feature vector for the regression task at arbitrary scales
and locations. Lastly, a Gaussian Mixture Model (GMM)
dynamically revises the score threshold of pseudo-bboxes,
which stabilizes the number of ground truths at an early
stage and remedies the unreliable supervision signal dur-
ing training. Consistent-Teacher provides strong re-
sults on a large range of SSOD evaluations. It achieves
40.0 mAP with ResNet-50 backbone given only 10% of an-
notated MS-COCO data, which surpasses previous base-
lines using pseudo labels by around 3 mAP . When trained on
fully annotated MS-COCO with additional unlabeled data,
the performance further increases to 47.7 mAP . Our code
is available at https://github.com/Adamdad/ConsistentTeacher.
| 1. Introduction
The goal of semi-supervised object detection (SSOD) [3,
5, 12, 12, 13, 17, 24, 25, 30, 36, 43, 44] is to facilitate the
training of object detectors with the help of a large amount
of unlabeled data. The common practice is first to train a
teacher model on the labeled data and then generate pseudo
labels and boxes on unlabeled sets, which act as the ground
truth (GT) for the student model. Student detectors, on
the other hand, are anticipated to make consistent predic-
tions regardless of network stochasticity [35] or data aug-
mentation [12, 30]. In addition, to improve pseudo-label
quality, the teacher model is updated as a moving aver-
age [24, 36, 44] of the student parameters.
In this study, we point out that the performance of semi-
supervised detectors is still largely hindered by the incon-
sistency in pseudo-targets. Inconsistency means that the
pseudo boxes may be highly inaccurate and vary greatly at
different stages of training. As a consequence, inconsistent
oscillating bounding boxes (bbox) bias SSOD predictions
with accumulated error. Different from semi-supervised
classification, SSOD has one extra step of assigning a set
of pseudo-bboxes to each RoI/anchor as dense supervision.
Common two-stage [24, 30, 36] and single-stage [4, 42]
SSOD networks adopt static criteria for anchor assignment,
e.g. IoU score or centerness. It is observed that the static
assignment is sensitive to noise in the bounding boxes pre-
dicted by the teacher, as a small perturbation in the pseudo-
bboxes might greatly affect the assignment results. It thus
leads to severe overfitting on unlabeled images.
To verify this phenomenon, we train a single-stage de-
tector with standard IoU-based assignment on MS-COCO
10% data. As shown in Fig. (1), a small change in the
teacher’s output results in strong noise in the boundaries
of pseudo-bboxes, causing erroneous targets to be associ-
ated with nearby objects under static IoU-based assignment.
This is because some inactivated anchors are falsely as-
signed positive in the student network. Consequently, the
network overfits as it produces inconsistent labels for neigh-
boring objects. The overfitting is also observed in the clas-
sification loss curve on unlabeled images1.
1All GT bboxes on unlabeled data are only used to calculate the loss
value but not for updating the parameters.
[Figure 1 panels — ✗ Mean-Teacher: inconsistent assignment and drifting pseudo labels; ✓ Ours: consistent and accurate targets; snapshots at T = 100K, 104K, 180K.]
Figure 1. Illustration of the inconsistency problem in SSOD on COCO 10% evaluation. (Left) We compare the training losses between the
Mean-Teacher and our Consistent-Teacher . In Mean-Teacher, inconsistent pseudo targets lead to overfitting on the classification
branch, while regression losses become difficult to converge. In contrast, our approach sets consistent optimization objectives for the stu-
dents, effectively balancing the two tasks and preventing overfitting. (Right) Snapshots for the dynamics of pseudo labels and assignment.
The Green and Red bboxes refer to the ground-truth and pseudo bbox, respectively, for the polar bear. Red dots are the assigned anchor
boxes for the pseudo label. The heatmap indicates the dense confidence score predicted by the teacher (brighter the larger). A nearby board
is finally misclassified as a polar bear in the baseline while our adaptive assignment prevents overfitting.
Through dedicated investigation, We find that one im-
portant factor that leads to the drifting pseudo-label is the
mismatch between classification and regression tasks. Typ-
ically, only the classification score is used to filter pseudo-
bboxes in SSOD. However, confidence does not always in-
dicate the quality of the bbox [36]. Two anchors with sim-
ilar scores, as a result, can have significantly different pre-
dicted pseudo-bboxes, leading to more false predictions and
label drifting. Such phenomenon is illustrated in Fig. (1)
with the varying pseudo-bboxes of the MeanTeacher around
T= 104 K. Therefore, the mismatch between the quality
of a bbox and its confidence score would result in noisy
pseudo-bboxes, which in turn exacerbates the label drifting.
The widely-employed hard threshold scheme also causes
threshold inconsistencies in pseudo labels. Traditional
SSOD methods [24,30,36] utilize a static threshold on con-
fidence score for student training. However, the thresh-
old serves as a hyper-parameter, which not only needs to
be carefully tuned but should also be dynamically adjusted
in accordance with the model’s capability at different time
steps. In the Mean-Teacher [32] paradigm, the number of
pseudo-bboxes may increase from too few to too many un-
der a hard threshold scheme, which incurs inefficient and
biased supervision for the student.
Therefore, we propose Consistent-Teacher in this
study to address the inconsistency issues. First, we find that
a simple replacement of the static IoU-based anchor assign-
ment by cost-aware adaptive sample assignment (ASA) [10,11] greatly alleviates the effect of inconsistency in dense
pseudo-targets. During each training step, we calculate the
matching cost between each pseudo-bbox with the student
network’s predictions. Only feature points with the lowest
costs are assigned as positive. It reduces the mismatch be-
tween the teacher’s high-response features and the student’s
assigned positive pseudo targets, which inhibits overfitting.
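A minimal sketch of such cost-aware assignment is given below: each anchor's cost combines how confidently the student already predicts the pseudo-label and how well its predicted box overlaps the pseudo-box, and only the k cheapest anchors become positives. The cost weighting, the top-k rule, and all names are illustrative assumptions; the actual ASA used in the paper follows [10, 11].

```python
import torch

def adaptive_sample_assignment(cls_scores, pred_boxes, pseudo_box, pseudo_label, k=9, lam=3.0):
    """Pick positive anchors for one pseudo-box by prediction-aware cost.

    cls_scores: (A, C) sigmoid classification scores of the student.
    pred_boxes: (A, 4) student box predictions in (x1, y1, x2, y2).
    pseudo_box: (4,)  teacher pseudo-bbox; pseudo_label: int class index.
    Returns the indices of the k lowest-cost anchors.
    """
    eps = 1e-6
    # Classification cost: low when the student already fires on the right class.
    cls_cost = -torch.log(cls_scores[:, pseudo_label].clamp(min=eps))

    # Regression cost: 1 - IoU between student boxes and the pseudo-box.
    x1 = torch.maximum(pred_boxes[:, 0], pseudo_box[0])
    y1 = torch.maximum(pred_boxes[:, 1], pseudo_box[1])
    x2 = torch.minimum(pred_boxes[:, 2], pseudo_box[2])
    y2 = torch.minimum(pred_boxes[:, 3], pseudo_box[3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
    area_p = (pred_boxes[:, 2] - pred_boxes[:, 0]) * (pred_boxes[:, 3] - pred_boxes[:, 1])
    area_g = (pseudo_box[2] - pseudo_box[0]) * (pseudo_box[3] - pseudo_box[1])
    iou = inter / (area_p + area_g - inter + eps)
    reg_cost = 1.0 - iou

    total_cost = cls_cost + lam * reg_cost
    return torch.topk(total_cost, k, largest=False).indices
```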
Then, we calibrate the classification and regression tasks
so that the teacher’s classification confidence provides a
better proxy of the bbox quality. It produces consistent
pseudo-bboxes for anchors of similar confidence scores,
and thus the oscillation in pseudo-bbox boundaries is re-
duced. Inspired by TOOD [9], we propose a 3-D feature
alignment module (FAM-3D) that allows classification fea-
tures to sense and adopt the best feature in its neighbor-
hood for regression. Different from the single scale search-
ing, FAM-3D reorders the features pyramid for regression
across scales as well. In this way, a unified confidence score
accurately measures the quality of classification and regres-
sion with the improved alignment module and ultimately
brings consistent pseudo-targets for the student in SSOD.
As for the threshold inconsistency in pseudo-bboxes,
we apply Gaussian Mixture Model (GMM) to generate an
adaptive threshold for each category during training. We
consider the confidence scores of each class as the weighted
sum of positive and negative distributions and predict the
parameters of each Gaussian with maximum likelihood es-
timation. It is expected that the model will be able to adap-
tively infer the optimal threshold at different training steps
so as to stabilize the number of positive samples.
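The per-class adaptive threshold can be sketched with scikit-learn's GaussianMixture, as below: fit two Gaussians to the current scores, treat the higher-mean component as the positive mode, and threshold where its posterior passes 0.5. This is a simplified stand-in for the paper's procedure; the 0.5 posterior rule and the fallback are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_score_threshold(scores):
    """Estimate a per-class confidence threshold from this iteration's scores.

    scores: 1-D array of teacher confidence scores for one class.
    A two-component GMM models the negative / positive score modes; the threshold
    is the lowest score whose posterior of belonging to the positive
    (higher-mean) component exceeds 0.5.
    """
    scores = np.asarray(scores, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(scores)
    positive = int(np.argmax(gmm.means_.ravel()))      # component with higher mean
    post = gmm.predict_proba(scores)[:, positive]
    accepted = scores[post > 0.5]
    # Fall back to the midpoint of the two means if nothing is accepted.
    return float(accepted.min()) if accepted.size else float(gmm.means_.mean())

# Example: a mixture of low-score background and high-score true detections.
rng = np.random.default_rng(0)
fake_scores = np.concatenate([rng.normal(0.15, 0.05, 500), rng.normal(0.7, 0.1, 80)])
print(round(gmm_score_threshold(np.clip(fake_scores, 0, 1)), 3))
```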
The proposed Consistent-Teacher greatly sur-
passes current SSOD methods. Our approach reaches 40.0
mAP with 10% of labeled data on MS-COCO, which is ∼3
mAP ahead of the state-of-the-art [43]. When using the
100% labels together with extra unlabeled MS-COCO data,
the performance is further boosted to 47.7 mAP. The effec-
tiveness of Consistent-Teacher is also testified on
other ratios of labeled data and on other datasets as well.
Concretely, the paper contributes in the following aspects.
• We provide the first in-depth investigation of the incon-
sistent target problem in SSOD, which incurs severe
overfitting issues.
• We introduce an adaptive sample assignment to sta-
bilize the matching between noisy pseudo-bboxes and
anchors, leading to robust training for the student.
• We develop a 3-D feature alignment module (FAM-
3D) to calibrate the classification confidence and
regression quality, which improves the quality of
pseudo-bboxes.
• We adopt GMM to flexibly determine the threshold
for each class during training. The adaptive threshold
evolves through time and reduces the threshold incon-
sistencies for SSOD.
•Consistent-Teacher achieves compelling im-
provement on a wide range of evaluations and serves
as a new solid baseline for SSOD.
|
Wang_Spatial-Frequency_Mutual_Learning_for_Face_Super-Resolution_CVPR_2023 | Abstract
Face super-resolution (FSR) aims to reconstruct high-
resolution (HR) face images from the low-resolution (LR)
ones. With the advent of deep learning, the FSR technique
has achieved significant breakthroughs. However, existing
FSR methods either have a fixed receptive field or fail to
maintain facial structure, limiting the FSR performance. To
circumvent this problem, Fourier transform is introduced,
which can capture global facial structure information and
achieve image-size receptive field. Relying on the Fourier
transform, we devise a spatial-frequency mutual network
(SFMNet) for FSR, which is the first FSR method to ex-
plore the correlations between spatial and frequency do-
mains as far as we know. To be specific, our SFMNet is
a two-branch network equipped with a spatial branch and
a frequency branch. Benefiting from the property of Fourier
transform, the frequency branch can achieve image-size re-
ceptive field and capture global dependency while the spa-
tial branch can extract local dependency. Considering that
these dependencies are complementary and both favorable
for FSR, we further develop a frequency-spatial interac-
tion block (FSIB) which mutually amalgamates the com-
plementary spatial and frequency information to enhance
the capability of the model. Quantitative and qualitative
experimental results show that the proposed method out-
performs state-of-the-art FSR methods in recovering face
images. The implementation and model will be released at
https://github.com/wcy-cs/SFMNet .
| 1. Introduction
Face super-resolution (FSR), also known as face halluci-
nation, is a technology which can transform low-resolution
(LR) face images into the corresponding high-resolution
(HR) ones. Limited by low-cost cameras and imaging con-
ditions, the obtained face images are always low-quality,
resulting in a poor visual effect and deteriorating the down-
stream tasks, such as face recognition, face attribute analysis, face editing, etc.
Figure 1. Decomposition and reconstruction of face images in the frequency domain. (a) face images; (b) their amplitude spectra; (c) their phase spectra; (d) images reconstructed with amplitude information only; (e) images reconstructed with phase information only.
Therefore, FSR has become an emerg-
ing scientific tool and has gained more of the spotlight in the
computer vision and image processing communities [20].
FSR is an ill-posed challenging problem. In contrast to
general image super-resolution, FSR only focuses on the
face images and is tasked with recovering pivotal facial
structures. The first FSR method proposed by Baker and
Kanade [1] sets off the upsurge of traditional FSR methods.
These traditional methods mainly resort to PCA [6], convex
optimization [23], Bayesian approach [42] and manifold
learning [19] to improve the quality of face images. Nev-
ertheless, they are still incompetent in recovering plausible
face images due to their limited representation abilities. In
recent years, FSR has made a dramatic leap, benefiting from
the advent of deep learning [20]. Researchers develop var-
ious network frameworks to learn the transformation from
LR face images to the corresponding HR ones, including
single-task learning frameworks [5, 8, 17], multi-task learn-
ing frameworks [4,9,32,51], etc., which has greatly pushed
forward the frontier of FSR research.
Although existing FSR methods improve FSR perfor-
mance, they still have limitations to be tackled. Face im-
age has global facial structure which plays an important
role in transforming LR face images into the correspond-
ing HR ones. However, the actual receptive field of the
convolutional neural network is limited due to the vanish-
ing gradient problem, failing to model global dependency.
To achieve large receptive field, transformer has been ap-
plied in computer vision tasks [37, 54]. The self-attention
mechanism among every patch can model long-range de-
pendency, but it usually has a high demand for both training
data and computation resource. In addition, the partition
strategy may also destruct the structure of the facial image.
Therefore, an effective FSR method that can achieve image-
size receptive field and maintain the facial structure is an ur-
gent demand. To meet this need, frequency information is
introduced. It is well-accepted that features (for each pixel
or position) in frequency domain can achieve image-size re-
ceptive field and naturally have the ascendency of captur-
ing global dependency [33], and this can well complement
the local facial features extracted in the spatial domain. To
obtain frequency information, Fourier transform is adopted
to decompose the image into the amplitude component and
the phase component, which can well characterize the facial
structure information. As shown in Fig. 1, the image recon-
structed with the phase component reveals clear facial struc-
tural information which is lost in the LR face images. Natu-
rally, the phase component of the Fourier transform contains
key missing information that is critical for FSR task.
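The decomposition behind Fig. 1 can be reproduced in a few lines of PyTorch, shown below: the 2-D FFT splits an image into amplitude and phase, and an image rebuilt from the phase spectrum alone (unit amplitude) keeps the facial structure. This is a generic illustration of the Fourier property the paper relies on, not the SFMNet frequency branch itself.

```python
import torch

def phase_only_reconstruction(image):
    """Decompose an image with the 2-D Fourier transform and rebuild it
    from the phase spectrum alone (amplitude set to 1), mirroring Fig. 1(e).

    image: (H, W) grayscale tensor. Returns (amplitude, phase, phase_only_image).
    """
    spectrum = torch.fft.fft2(image)
    amplitude = spectrum.abs()
    phase = spectrum.angle()

    # Keep only the phase: the result retains the global structural information.
    phase_only = torch.fft.ifft2(torch.exp(1j * phase)).real
    return amplitude, phase, phase_only

# Example on a random image; in practice this would be an LR face image.
img = torch.rand(128, 128)
amp, pha, structure = phase_only_reconstruction(img)
print(amp.shape, pha.shape, structure.shape)
```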
Based on the above analysis, we propose a novel spatial-
frequency mutual network (SFMNet) for FSR, which ex-
plores the incorporation between spatial and frequency do-
mains. The SFMNet is a two-branch network, including
a frequency branch and a spatial branch. The frequency
branch is tasked with capturing global facial structure by
the Fourier transform, while the spatial branch is tailored
for extracting local facial features. The global information
in frequency domain and the local information in spatial do-
main are complementary, and both of them can enhance the
representation ability of the model. In light of this, we care-
fully design a frequency-spatial interaction block (FSIB) to
mutually fuse frequency and spatial information to boost
FSR performance. Based on the SFMNet, we also develop
a GAN-based model with a spatial discriminator and a fre-
quency discriminator to guide the learning of the model in
both spatial and frequency domains, which can further force
the SFMNet to produce more high-frequency information.
Overall, the contributions of our work are three-fold:
i) We develop a spatial-frequency mutual network for
face super-resolution, to the best of our knowledge, this is
the first method that explores the potential of both spatial
and frequency information for face super-resolution.
ii) We carefully design a frequency-spatial interaction
block to mutually fuse global frequency information and
local spatial information. Thanks to its powerful modeling
ability, the complementary information contained in spatial
and frequency domains can be fully explored and utilized.
iii) We conduct experiments to verify the superiority of
the proposed method. Experimental results on two widely
used benchmark datasets (i.e., CelebA [30] and Helen [25]) demonstrate that our method achieves the best performance in terms of visual results and quantitative metrics.
|
Vidit_CLIP_the_Gap_A_Single_Domain_Generalization_Approach_for_Object_CVPR_2023 | Abstract
Single Domain Generalization (SDG) tackles the prob-
lem of training a model on a single source domain so that
it generalizes to any unseen target domain. While this has
been well studied for image classification, the literature on
SDG object detection remains almost non-existent. To ad-
dress the challenges of simultaneously learning robust ob-
ject localization and representation, we propose to leverage
a pre-trained vision-language model to introduce semantic
domain concepts via textual prompts. We achieve this via
a semantic augmentation strategy acting on the features ex-
tracted by the detector backbone, as well as a text-based
classification loss. Our experiments evidence the benefits of
our approach, outperforming by 10% the only existing SDG
object detection method, Single-DGOD [52], on their own
diverse weather-driving benchmark.
| 1. Introduction
As for most machine learning models, the performance
of object detectors degrades when the test data distribu-
tion deviates from the training data one. Domain adap-
tation techniques [3, 5, 8, 33, 44, 46] try to alleviate this
problem by learning domain invariant features between a
source and a known target domain. In practice, however,
it is not always possible to obtain target data, even un-
labeled, precluding the use of such techniques. Domain
generalization tackles this by seeking to learn representa-
tions that generalize to any target domain. While early
approaches [1, 10, 28, 29, 31, 50, 60] focused on the sce-
nario where multiple source domains are available during
training, many recent methods tackle the more challenging,
yet more realistic, case of Single Domain Generalization
(SDG), aiming to learn to generalize from a single source
dataset. While this has been well studied for image clas-
sification [14, 38, 48, 51, 59], it remains a nascent topic in
object detection. To the best of our knowledge, a single ex-
isting approach, Single-DGOD [52], uses disentanglement
and self-distillation [25] to learn domain-invariant features.
Figure 1. Semantic Augmentation: We compare the PCA projections of CLIP [39] image embeddings obtained in two different manners: (Top) The embeddings were directly obtained from the real images from 5 domains corresponding to different weather conditions. (Bottom) The embeddings were obtained from the day images only and modified with our semantic augmentation strategy based on text prompts to reflect the other 4 domains. Note that the relative positions of the clusters in the bottom plot resemble those of the top one, showing that our augmentations let us generalize to different target domains. The principal components used are the same for both figures.
In this paper, we introduce a fundamentally different approach to SDG for object detection. To this end, we build
on two observations: (i) Unsupervised/self-supervised pre-
training facilitates the transfer of a model to new tasks [2,
4, 20]; (ii) Exploiting language supervision to train vision
models allows them to generalize more easily to new cat-
egories and concepts [9, 39]. Inspired by this, we there-
fore propose to leverage a self-supervised vision-language
model, CLIP [39], to guide the training of an object detec-
tor so that it generalizes to unseen target domains. Since the
visual CLIP representation has been jointly learned with the
textual one, we transfer text-based domain variations to the
image representation during training, thus increasing the di-
versity of the source data.
Specifically, we define textual prompts describing po-
tential target domain concepts, such as weather and day-
time variations for road scene understanding, and use these
prompts to perform semantic augmentations of the images.
These augmentations, however, are done in feature space,
not in image space, which is facilitated by the joint image-
text CLIP latent space. This is illustrated in Fig. 1, which
shows that, even though we did not use any target data
for semantic augmentation, the resulting augmented embed-
dings reflect the distributions of the true image embeddings
from different target domains.
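As a rough illustration of such text-driven augmentation in the joint CLIP space, the sketch below builds one shift direction per target-domain prompt and adds it to CLIP-aligned image features. The prompts, the strength parameter, and applying the shift directly to CLIP image embeddings (rather than to detector backbone features, as the method does) are all simplifying assumptions.

```python
import torch
import clip  # OpenAI CLIP package (pip install git+https://github.com/openai/CLIP.git)

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

source_prompt = "a photo taken on a clear day"
target_prompts = ["a photo taken in heavy rain",
                  "a photo taken in dense fog",
                  "a photo taken at night"]

with torch.no_grad():
    src = model.encode_text(clip.tokenize([source_prompt]).to(device))
    tgt = model.encode_text(clip.tokenize(target_prompts).to(device))
    src = src / src.norm(dim=-1, keepdim=True)
    tgt = tgt / tgt.norm(dim=-1, keepdim=True)
    # One direction per hypothetical target domain in the joint CLIP space.
    domain_shifts = tgt - src                     # (3, 512) for ViT-B/32

def semantically_augment(image_features, strength=1.0):
    """Shift CLIP-aligned image features towards a randomly chosen domain."""
    shift = domain_shifts[torch.randint(len(domain_shifts), (1,)).item()]
    feats = image_features + strength * shift
    return feats / feats.norm(dim=-1, keepdim=True)
```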
We show the effectiveness of our method on the SDG
driving dataset of [52], which reflects a practical scenario
where the training (source) images were captured on a
clear day whereas the test (target) ones were acquired in
rainy, foggy, night, and dusk conditions. Our experiments
demonstrate the benefits of our approach over the Single-
DGOD [52] one.
To summarize our contributions, we employ a vision-
language model to improve the generalizability of an object
detector; during training, we introduce domain concepts via
text-prompts to augment the diversity of the learned image
features and make them more robust to an unseen target do-
main. This enables us to achieve state-of-the-art results on
the diverse weather SDG driving benchmark of [52]. Our
implementation can be accessed through the following url:
https://github.com/vidit09/domaingen.
|
Wei_Adaptive_Graph_Convolutional_Subspace_Clustering_CVPR_2023 | Abstract
Spectral-type subspace clustering algorithms have
shown excellent performance in many subspace clustering
applications. The existing spectral-type subspace cluster-
ing algorithms either focus on designing constraints for the
reconstruction coefficient matrix or feature extraction meth-
ods for finding latent features of original data samples. In
this paper, inspired by graph convolutional networks, we
use the graph convolution technique to develop a feature
extraction method and a coefficient matrix constraint si-
multaneously. And the graph-convolutional operator is up-
dated iteratively and adaptively in our proposed algorithm.
Hence, we call the proposed method adaptive graph con-
volutional subspace clustering (AGCSC). We claim that,
by using AGCSC, the aggregated feature representation of
original data samples is suitable for subspace clustering,
and the coefficient matrix could reveal the subspace struc-
ture of the original data set more faithfully. Finally, plenty
of subspace clustering experiments prove our conclusions
and show that AGCSC1outperforms some related methods
as well as some deep models.
| 1. Introduction
Subspace clustering has become an attractive topic in
machine learning and computer vision fields due to its suc-
cess in a variety of applications, such as image processing
[14, 48], motion segmentation [6, 17], and face clustering
[27]. The goal of subspace clustering is to arrange the high-
dimensional data samples into a union of linear subspaces
where they are generated [1, 25, 36]. In the past decades,
different types of subspace clustering algorithms have been
proposed [3, 11, 23, 37, 44]. Among them, spectral-type
subspace clustering methods have shown promising perfor-
mance in many real-world tasks.
¹We present the codes of AGCSC and the evaluated algorithms on https://github.com/weilyshmtu/AGCSC.
Suppose a data matrix $X = [x_1; x_2; \cdots; x_n] \in \mathbb{R}^{n \times d}$ contains $n$ data samples drawn from $k$ subspaces, and $d$ is the number of features. The general formulation of a spectral-type subspace clustering algorithm could be expressed as follows:
$$\min_{C} \; \Omega\big(\Phi(X) - C\Phi(X)\big) + \lambda \Psi(C), \qquad (1)$$
where $\Phi(\cdot)$ is a function that is used to find the meaningful latent features for original data samples. It could be either a linear or a non-linear feature extraction method [26, 42, 45], or even a deep neural network [27]. $C \in \mathbb{R}^{n \times n}$ is the reconstruction coefficient matrix and $\Psi(C)$ is usually some kind of constraint of $C$. In addition, $\Omega(\cdot)$ is a function to measure the reconstruction residual, and $\lambda$ is a hyper-parameter. After $C$ is obtained, an affinity graph $A$ is defined as $A = (|C| + |C^{\top}|)/2$, where $C^{\top}$ is the transpose of $C$. Then a certain spectral clustering, e.g., Normalized cuts (Ncuts) [33], is used to produce the final clustering results.
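For reference, this standard pipeline — symmetrize $|C|$ into an affinity graph and cluster it spectrally — can be sketched as follows; scikit-learn's spectral clustering is used here in place of Ncuts, and the toy coefficient matrix is only for illustration.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_from_coefficients(C, k):
    """Turn a learned reconstruction coefficient matrix C into cluster labels.

    Follows the standard spectral-type pipeline: symmetrize |C| into an
    affinity graph, then apply spectral clustering (used here in place of Ncuts).
    """
    A = 0.5 * (np.abs(C) + np.abs(C.T))          # A = (|C| + |C^T|) / 2
    np.fill_diagonal(A, 0.0)                      # no self-loops
    labels = SpectralClustering(n_clusters=k,
                                affinity="precomputed",
                                assign_labels="kmeans",
                                random_state=0).fit_predict(A)
    return labels

# Example with a toy block-diagonal coefficient matrix for k = 2 subspaces.
C = np.block([[np.ones((5, 5)), 0.01 * np.ones((5, 5))],
              [0.01 * np.ones((5, 5)), np.ones((5, 5))]])
print(cluster_from_coefficients(C, k=2))
```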
Classical spectral-type subspace clustering algorithms mainly focus on designing $\Psi(C)$ to help $C$ to carry certain characteristics and hope $C$ can reveal the intrinsic structure of the original data set. For example, sparse subspace clustering (SSC) [6] lets $\Psi(C) = \|C\|_1$, which makes $C$ a sparse matrix. In low-rank representation (LRR) [17], $\Psi(C)$ is the nuclear norm of $C$, which helps discover the global structure of a data set. Least square regression (LSR) [20] aims to find a dense reconstruction coefficient matrix by setting $\Psi(C) = \|C\|_F^2$. Block diagonal representation (BDR) [19] makes $\Psi(C)$ a $k$-block diagonal regularizer to pursue a $k$-block diagonal coefficient matrix.
Recently, deep subspace clustering algorithms (DSCs)
reported much better results than the classical spectral-type
subspace clustering algorithms. The main difference be-
tween (DSCs) and the classical spectral-type subspace clus-
tering algorithms is that DSCs use deep auto-encoders to
extract latent features from original data [21, 27, 29]. But it
is pointed out that the success of DSCs may be attributed to
the usage of an ad-hoc post-processing strategy [8]. Though
the rationality of the existing DSCs is required further dis-
cussion, some deep learning techniques are still worthy of
being introduced into spectral-type subspace clustering al-
gorithms.
In this paper, inspired by graph convolutional networks
[4, 12, 40], we explore the problem of using graph convo-
lutional techniques to design the feature extraction function
Θ(·)and the constraint function Ψ(·)simultaneously. Dif-
ferent from the existing graph convolution methods which
need a predefined affinity graph, we apply the required co-
efficient matrix Cto construct a graph convolutional oper-
ator. So the graph convolutional operator will be updated
adaptively and iteratively in the proposed subspace clus-
tering algorithm. Consequently, on one hand, by applying
the adaptive graph convolutional operator, the aggregated
feature representations of original data samples in the same
subspace will be gathered closer, and those in different sub-
spaces will be separated further. On the other hand, in the
obtained coefficient matrix, the coefficients corresponding
to different data samples will also have a similar charac-
teristic to the samples’ feature representations, so it could
reveal the intrinsic structure of data sets more accurately.
The overview pipeline of the proposed method is illustrated
in Fig. 1.
[Figure 1 schematic: X (original data) → C (coefficient matrix) → S (graph convolutional operator).]
Figure 1. The overview of the proposed method. The graph convolutional operator S will be updated iteratively based on C. And the updated S will in turn affect the computation of C and feature aggregation.
|
Wu_Boosting_Detection_in_Crowd_Analysis_via_Underutilized_Output_Features_CVPR_2023 | Abstract
Detection-based methods have been viewed unfavorably
in crowd analysis due to their poor performance in dense
crowds. However, we argue that the potential of these meth-
ods has been underestimated, as they offer crucial informa-
tion for crowd analysis that is often ignored. Specifically,
the area size and confidence score of output proposals and
bounding boxes provide insight into the scale and density
of the crowd. To leverage these underutilized features, we
propose Crowd Hat, a plug-and-play module that can be
easily integrated with existing detection models. This mod-
ule uses a mixed 2D-1D compression technique to refine
the output features and obtain the spatial and numerical
distribution of crowd-specific information. Based on these
features, we further propose region-adaptive NMS thresh-
olds and a decouple-then-align paradigm that address the
major limitations of detection-based methods. Our exten-
sive evaluations on various crowd analysis tasks, including
crowd counting, localization, and detection, demonstrate
the effectiveness of utilizing output features and the poten-
tial of detection-based methods in crowd analysis. Our code
is available at https://github.com/wskingdom/
Crowd-Hat .
| 1. Introduction
Crowd analysis is a critical area in computer vision due
to its close relation with humans and its wide range of appli-
cations in public security, resource scheduling, crowd mon-
itoring [18, 28, 33]. This field can be divided into three
concrete tasks: crowd counting [12, 17, 27], crowd local-
ization [1, 23, 26], and crowd detection [16, 20, 30]. While
most existing methods mainly focus on the first two tasks
due to the extreme difficulty of detecting dense crowds,
simply providing the number of the crowd or representing
each person with a point is insufficient for the growing real-
world demand. Crowd detection, which involves localiz-
ing each person with a bounding box, supports more down-
stream tasks, such as crowd tracking [24] and face recogni-
[Figure 1 blocks: Output Feature Compression (2D Conv, 1D Conv) → local and global features → region-adaptive NMS thresholds, crowd count, and decouple-then-align.]
Figure 1. Our approach involves extracting output features from the detection outputs and refining them into 2D compressed matrices and 1D distribution vectors. These features are then encoded into local and global feature vectors to regress region-adaptive NMS thresholds and the crowd count. We select the final output bounding boxes using the decouple-then-align paradigm.
tion [35]. Therefore, constructing a comprehensive crowd
analysis framework that addresses all three tasks is essential
to meet real-world demands.
Although object detection may seem to meet the de-
mands above, most relevant research views it pessimisti-
cally, especially in dense crowds [12, 23, 27, 28, 37]. First
and foremost, compared to general object detection sce-
narios with bounding box annotations, most crowd analy-
sis datasets only provide limited supervision in the form
of point annotations. As a result, detection methods are
restricted to using pseudo bounding boxes generated from
point labels for training [16, 20, 30]. However, the inferior
quality of these pseudo bounding boxes makes it difficult
for neural networks to obtain effective supervision [1, 23].
Secondly, the density of crowds varies widely among
images, ranging from zero to tens of thousands [6, 20, 22,
28, 38], and may vary across different regions in the same
image, presenting a significant challenge in choosing the
allowed overlapping region in Non-Maximum-Suppression
(NMS). A fixed NMS threshold is often considered a hy-
perparameter, but it yields a large number of false positives
within low-density crowds and false negatives within high-
density crowds, as criticized in [23].
Thirdly, current detection methods adopt the detection-
counting paradigm [17, 27] for crowd counting, where the
number of humans is counted by the bounding boxes ob-
tained from the detection output. However, the crowd de-
tection task is extremely challenging without bounding box
labels for training, resulting in a large number of mislabeled
and unrecognized boxes in crowds [23,28]. This inaccurate
detection result makes the detection-counting paradigm per-
form poorly, yielding inferior counting results compared to
density-based methods.
In this paper, we ask: Has the potential of object de-
tection been fully discovered? Our findings suggest that
crucial information from detection outputs, such as the size
and confidence score of proposals and bounding boxes, are
largely disregarded. This information can provide valuable
insights into crowd-specific characteristics. For instance, in
dense crowds, bounding boxes tend to be smaller with lower
confidence scores due to occlusion, while sparse crowds
tend to produce boxes with higher confidence scores.
To this end, we propose the a module on top of the Head
of detection pipeline to leverage these underutilized detec-
tion outputs. We name this module as “Crowd Hat” be-
cause it can be adapted to different detection methods, just
as a hat can be easily put on different heads. Specifically, we
first introduce a mixed 2D-1D compression to refine both
spatial and numerical distribution of output features from
the detection pipeline. We further propose a NMS decoder
to learn region-adaptive NMS thresholds from these fea-
tures, which effectively reduces false positives under low-
density regions and false negatives under high-density re-
gions. Additionally, we use a decouple-then-align paradigm
to improve counting performance by regressing the crowd
count from output features and using this predicted count
to guide the bounding box selection. Our Crowd Hat
module can be integrated into various one-stage and two-
stage object detection methods, bringing significant perfor-
mance improvements for crowd analysis tasks. Extensive
experiments on crowd counting, localization and detection
demonstrate the effectiveness of our proposed approach.
Overall, the main contributions of our work can be sum-
marized as the following:
• To the best of our knowledge, we are the first to con-
sider detection outputs as valuable features in crowd
analysis and propose the mixed 2D-1D compression to
refine crowd-specific features from them.
• We introduce region-adaptive NMS thresholds and a
decouple-then-align paradigm to mitigate major draw-
backs of detection-based methods in crowd analysis.
• We evaluate our method on public benchmarks of
crowd counting, localization, and detection tasks,showing our method can be adapted to different de-
tection methods while achieving better performance.
|
Xie_DINER_Disorder-Invariant_Implicit_Neural_Representation_CVPR_2023 | Abstract
Implicit neural representation (INR) characterizes the
attributes of a signal as a function of corresponding coor-
dinates which emerges as a sharp weapon for solving in-
verse problems. However , the capacity of INR is limited bythe spectral bias in the network training. In this paper , we
find that such a frequency-related problem could be largely
solved by re-arranging the coordinates of the input signal,for which we propose the disorder-invariant implicit neu-
ral representation (DINER) by augmenting a hash-table toa traditional INR backbone. Given discrete signals sharingthe same histogram of attributes and different arrangement
orders, the hash-table could project the coordinates into the
same distribution for which the mapped signal can be better
modeled using the subsequent INR network, leading to sig-
nificantly alleviated spectral bias. Experiments not only re-
veal the generalization of the DINER for different INR back-bones (MLP vs. SIREN) and various tasks (image/videorepresentation, phase retrieval, and refractive index recov-ery) but also show the superiority over the state-of-the-
art algorithms both in quality and speed. Project page:
http s:// ezio77.github.io/DINER-website/
| 1. Introduction
INR [ 34] continuously describes a signal, providing the
advantages of Nyquist-sampling-free scaling, interpolation,
and extrapolation without requiring the storage of additionalsamples [ 20]. By combining it with differentiable physical
mechanisms such as the ray-marching rendering [ 12,22],
Fresnel diffraction propagation [ 44] and partial differential
equations [ 11], INR becomes a universal and sharp weapon
for solving inverse problems and has achieved significant
progress in various scientific tasks, e.g. , the novel view syn-
thesis [ 39], intensity diffraction tomography [ 19] and mul-
∗The work was supported in part by National Key Research and
Development Project of China (2022YFF0902402) and NSFC (Grants62025108, 62101242, 62071219).0 500 1000 1500 2000 2500 3000
Epochs5101520253035404550PSNR (dB)
DINER(SIREN)
DINER(MLP)
SIREN
InstantNGP
PE+MLP
Figure 1. PSNR of various INRs on 2D image fitting over different
training epochs.
tiphysics simulation [ 11].
However, the capacity of INR is often limited by the un-
derlying network model itself. For example, the spectralbias [ 28] usually makes the INR easier to represent low-
frequency signal components (see Fig. 1and Fig. 2(c)). To
improve the representation capacity of the INR model, pre-vious explorations mainly focus on encoding more frequen-cy bases using either Fourier basis [ 22,34,38,42] or wavelet
basis [ 6,17] into the network. However, the length of func-
tion expansion is infinite in theory, and a larger model withmore frequency bases runs exceptionally slowly.
Such a problem is closely related to the input signal’s
frequency spectrum. The signal’s frequency tells how fastthe signal attribute changes following the intrinsic order ofgeometry coordinates. By properly re-arranging the orderof coordinates of a discrete signal, we could modulate the
signal’s frequency spectrum to possess more low-frequency
components. Such a re-arranged signal can be better mod-eled using subsequent INR. Based on this observation, we
propose the DINER. In DINER, the input coordinate is first
mapped to another index using a hash-table and fed into a
traditional INR backbone. We prove that no matter what
orders the elements in the signal are arranged, the join-
t optimization of the hash-table and the network parameters
guarantees the same mapped signal (Fig. 2(d)) with more
[Figure 2 panels, for three arrangements of Baboon (sorted, original, random): (a) GT and (b) its spectrum; (c) results by MLP (13.48–22.60 dB); (d) hash-mapped input; (e) results by DINER (≈31.0 dB); (f) learned INR in DINER and (g) its spectrum.]
Figure 2. Comparisons of the existing INR and DINER for representing Baboon with different arrangements. From top to bottom, pixels
in the Baboon are arranged in different geometric orders, while the histogram is not changed (the right-bottom panel in (a)). From left
to right, (a) and (b) refer to the ground truth image and its Fourier spectrum. (c) contains results by an MLP with positional encoding
(PE+MLP) [ 38] at a size of 2×64. (d) refers to the hash-mapped coordinates. (e) refers to the results of the DINER that uses the same-size
MLP as (c). (f) refers to the learned INR in DINER by directly feeding grid coordinates to the trained MLP (see Sec. 4.1 for more details).
(g) is the Fourier spectrum of (f). (g) shares the same scale bar with (b).
low-frequency components. As a result, the representation
capacity of the existing INR backbones and the task perfor-
mance are largely improved. As in Fig. 1and Fig. 2(e), a
tiny shallow and narrow MLP-based INR could well char-
acterize the input signal with arbitrary arrangement orders.The use of hash-table trades the storage for fast computation
where the caching of its learnable parameters for mapping
is usually at a similar size as the input signal, but the com-
putational cost is marginal since the back-propagation forthe hash-table derivation is O(1).
Main contributions are summarized as follows:
1. The inferior representation capacity of the existing IN-
R model is largely increased by the proposed DINER,in which a hash-table is augmented to map the coordi-
nates of the original input for better characterization in
the succeeding INR model.
2. The proposed DINER provides consistent mapping
and representation capacity for signals sharing thesame histogram of attributes and different arrangement
orders.
3. The proposed DINER is generalized in various tasks,
including the representation of 2D images and 3D
videos, phase retrieval in lensless imaging, as well as3D refractive index recovery in intensity diffraction to-mography, reporting significant performance gains tothe existing state-of-the-arts. |
Weng_PersonNeRF_Personalized_Reconstruction_From_Photo_Collections_CVPR_2023 | Abstract
We present PersonNeRF , a method that takes a collec-
tion of photos of a subject ( e.g. Roger Federer) captured
across multiple years with arbitrary body poses and ap-
pearances, and enables rendering the subject with arbitrary
novel combinations of viewpoint, body pose, and appear-
ance. PersonNeRF builds a customized neural volumetric
3D model of the subject that is able to render an entire space
spanned by camera viewpoint, body pose, and appearance.
A central challenge in this task is dealing with sparse ob-
servations; a given body pose is likely only observed by
a single viewpoint with a single appearance, and a givenappearance is only observed under a handful of different
body poses. We address this issue by recovering a canon-
ical T-pose neural volumetric representation of the subject
that allows for changing appearance across different ob-
servations, but uses a shared pose-dependent motion field
across all observations. We demonstrate that this approach,
along with regularization of the recovered volumetric ge-
ometry to encourage smoothness, is able to recover a model
that renders compelling images from novel combinations of
viewpoint, pose, and appearance from these challenging un-
structured photo collections, outperforming prior work for
free-viewpoint human rendering.
| 1. Introduction
We present a method for transforming an unstructured
personal photo collection, containing images spanning mul-
tiple years with different outfits, appearances, and body
poses, into a 3D representation of the subject. Our system,
which we call PersonNeRF, enables us to render the subject
under novel unobserved combinations of camera viewpoint,
body pose, and appearance.
Free-viewpoint rendering from unstructured photos is a
particularly challenging task because a photo collection can
contain images at different times where the subject has dif-
ferent clothing and appearance. Furthermore, we only have
access to a handful of images for each appearance, so it is
unlikely that all regions of the body would be well-observed
for any given appearance. In addition, any given body pose
is likely observed from just a single or very few camera
viewpoints.
We address this challenging scenario of sparse viewpoint
and pose observations with changing appearance by mod-
eling a single canonical-pose neural volumetric represen-
tation that uses a shared motion weight field to describe
how the canonical volume deforms with changes in body
pose, all conditioned on appearance-dependent latent vec-
tors. Our key insight is that although the observed body
poses have different appearances across the photo collec-
tion, they should all be explained by a common motion
model since they all come from the same person. Further-
more, although the appearances of a subject can vary across
the photo collection, they all share common properties such
as symmetry so embedding appearance in a shared latent
space can help the model learn useful priors.
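The shared-canonical design can be caricatured as below: one pose-conditioned motion field (shared by every photo) warps observation-space samples into the canonical T-pose volume, and only the color head sees a per-appearance latent code. This is a heavily simplified sketch — the real system builds on HumanNeRF's skeletal motion weight fields — and every module size and name here is an assumption, not the actual architecture.

```python
import torch
import torch.nn as nn

class CanonicalHumanField(nn.Module):
    """Sketch of the shared-canonical design: one pose-driven motion field for
    all photos, plus an appearance latent per outfit for the color head."""

    def __init__(self, num_appearances, latent_dim=16, pose_dim=72, hidden=128):
        super().__init__()
        self.appearance = nn.Embedding(num_appearances, latent_dim)
        # Motion field: maps an observation-space point + body pose to an offset
        # into the canonical T-pose volume (shared by every appearance).
        self.motion = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        # Canonical radiance field: density from geometry only, color also
        # conditioned on the appearance latent.
        self.density = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.color = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, points, pose, appearance_id):
        pose = pose.expand(points.shape[0], -1)
        canonical = points + self.motion(torch.cat([points, pose], dim=-1))
        sigma = self.density(canonical)
        z = self.appearance(appearance_id).expand(points.shape[0], -1)
        rgb = torch.sigmoid(self.color(torch.cat([canonical, z], dim=-1)))
        return rgb, sigma

# Query 1024 sample points for photo-collection appearance id 2.
field = CanonicalHumanField(num_appearances=5)
pts = torch.randn(1024, 3)
body_pose = torch.zeros(1, 72)
rgb, sigma = field(pts, body_pose, torch.tensor([2]))
```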
To this end, we build our work on top of Human-
NeRF [46], which is a state-of-the-art free-viewpoint hu-
man rendering approach that requires hundreds of images
of a subject without clothing or appearance changes. Along
with regularization, we extend HumanNeRF to account for
sparse observations as well as enable modeling diverse ap-
pearances. Finally, we build an entire personalized space
spanned by camera view, body pose, and appearance that
allows intuitive exploration of arbitrary novel combinations
of these attributes (as shown in Fig. 1).
|
Xu_HandsOff_Labeled_Dataset_Generation_With_No_Additional_Human_Annotations_CVPR_2023 | Abstract
Recent work leverages the expressive power of genera-
tive adversarial networks (GANs) to generate labeled syn-
thetic datasets. These dataset generation methods often
require new annotations of synthetic images, which forces
practitioners to seek out annotators, curate a set of synthetic
images, and ensure the quality of generated labels. We in-
troduce the HandsOff framework, a technique capable of
producing an unlimited number of synthetic images and cor-
responding labels after being trained on less than 50 pre-
existing labeled images. Our framework avoids the practi-
cal drawbacks of prior work by unifying the field of GAN in-
version with dataset generation. We generate datasets with
rich pixel-wise labels in multiple challenging domains such
as faces, cars, full-body human poses, and urban driving
scenes. Our method achieves state-of-the-art performance
in semantic segmentation, keypoint detection, and depth es-
timation compared to prior dataset generation approaches
and transfer learning baselines. We additionally showcase
its ability to address broad challenges in model develop-
ment which stem from fixed, hand-annotated datasets, such
as the long-tail problem in semantic segmentation. Project
page: austinxu87.github.io/handsoff .
| 1. Introduction
The strong empirical performance of machine learning
(ML) models has been enabled, in large part, by vast quan-
tities of labeled data. The traditional machine learning
paradigm, where models are trained with large amounts of
human labeled data, is typically bottlenecked by the signif-
icant monetary, time, and infrastructure investments needed
to obtain said labels. This problem is further exacerbated
when the data itself is difficult to collect. For example, col-
lecting images of urban driving scenes requires physical car
infrastructure, human drivers, and compliance with relevant
government regulations. Finally, collecting real labeled data can often lead to imbalanced datasets that are unrepresentative of the overall data distribution. For example, in long-tail settings, the data used to train a model often does not contain rare, yet crucial edge cases [39].
*Work done as an intern at Amazon. [email protected]
†Work done while at Amazon
Figure 1. The HandsOff framework uses a small number of existing labeled images and a generative model to produce infinitely many labeled images.
These limitations make collecting ever increasing
amounts of hand labeled data unsustainable. We advocate
for a shift away from the standard paradigm towards a world
where training data comes from an infinite collection of au-
tomatically generated labeled images. Such a dataset gen-
eration approach can allow ML practitioners to synthesize
datasets in a controlled manner, unlocking new model de-
velopment paradigms such as controlling the quality of gen-
erated labels and mitigating the long-tail problem.
In this work, we propose HandsOff, a generative adver-
sarial network (GAN) based dataset generation framework.
HandsOff is trained on a small number of existing labeled
images and capable of producing an infinite set of synthetic
images with corresponding labels (Fig. 1). To do so, we
unify concepts from two disparate fields: dataset genera-
tion and GAN inversion. While the former channels the
expressive power of GANs to dream new ideas in the form
of images, the latter connects those dreams to the knowl-
edge captured in annotations. In this way, our work brings
together what it means to dream and what it means to know.
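As a rough illustration of this recipe — and only that, since the real system builds on a pretrained StyleGAN-style generator and richer feature hypercolumns that are not reproduced here — the sketch below inverts a few labeled images into a toy generator's latent space, fits a small label head on the generator's features at those latents, and then samples new latents to emit synthetic image–label pairs. Every module, size, and training setting is an assumption for illustration.

```python
# Illustrative sketch of the overall recipe (not the official HandsOff code):
# (1) invert labeled images into the latent space of a pretrained generator,
# (2) fit a lightweight label head on the generator's features at those latents,
# (3) sample fresh latents to emit unlimited (image, label) pairs.
import torch
import torch.nn as nn

latent_dim, n_classes = 64, 5
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU())   # toy feature generator
to_image = nn.Linear(256, 3 * 8 * 8)                               # toy image decoder
label_head = nn.Linear(256, n_classes * 8 * 8)                     # per-pixel label head
for m in (generator, to_image):
    m.requires_grad_(False)                                        # pretrained and frozen

def invert(image, steps=100):
    """Optimize a latent so the frozen generator reproduces the given image."""
    w = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([w], lr=1e-2)
    for _ in range(steps):
        recon = to_image(generator(w)).view(1, 3, 8, 8)
        loss = (recon - image).pow(2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return w.detach()

# (1)+(2): fit the label head on a handful of inverted, hand-labeled images.
images = torch.rand(8, 3, 8, 8)
masks = torch.randint(0, n_classes, (8, 8, 8))
latents = torch.cat([invert(img[None]) for img in images])
opt = torch.optim.Adam(label_head.parameters(), lr=1e-3)
for _ in range(50):
    logits = label_head(generator(latents)).view(-1, n_classes, 8, 8)
    loss = nn.functional.cross_entropy(logits, masks)
    opt.zero_grad(); loss.backward(); opt.step()

# (3): generate a synthetic labeled pair from a new random latent.
z = torch.randn(1, latent_dim)
feat = generator(z)
synthetic_image = to_image(feat).view(3, 8, 8)
synthetic_label = label_head(feat).view(n_classes, 8, 8).argmax(0)
```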
Concretely, our paper makes the following contributions:
1. We propose a novel dataset generating framework,
called HandsOff, which unifies the fields of dataset
generation and GAN inversion. While prior meth-
ods for dataset generation [40] require new human
annotations on synthetically generated images, Hand-
sOff uses GAN inversion to train on existing labeled
datasets, eliminating the need for human annotations.
With≤50real labeled images, HandsOff is capable of
producing high quality image-label pairs (Sec. 3).
2. We demonstrate the HandsOff framework’s ability
to generate semantic segmentation masks, keypoint
heatmaps, and depth maps across several challeng-
ing domains (faces, cars, full body fashion poses,
and urban driving scenes) by evaluating performance
of a downstream task trained on our synthetic data
(Sec. 4.2, 4.3, and 4.4).
3. We show that HandsOff is capable of mitigating the
effects of the long-tail in semantic segmentation tasks.
By modifying the distribution of the training data,
HandsOff is capable of producing datasets that, when
used to train a downstream task, dramatically improve
performance in detecting long-tail parts (Sec. 4.5).
|
Xian_Neural_Lens_Modeling_CVPR_2023 | Abstract
Recent methods for 3D reconstruction and rendering in-
creasingly benefit from end-to-end optimization of the entire
image formation process. However, this approach is cur-
rently limited: effects of the optical hardware stack and in
particular lenses are hard to model in a unified way. This
limits the quality that can be achieved for camera calibra-
tion and the fidelity of the results of 3D reconstruction. In
this paper, we propose NeuroLens, a neural lens model for
distortion and vignetting that can be used for point projec-
tion and ray casting and can be optimized through both
operations. This means that it can (optionally) be used to
perform pre-capture calibration using classical calibration
targets, and can later be used to perform calibration or re-
finement during 3D reconstruction, e.g., while optimizing a
radiance field. To evaluate the performance of our proposed
model, we create a comprehensive dataset assembled from
the Lensfun database with a multitude of lenses. Using this
and other real-world datasets, we show that the quality of
our proposed lens model outperforms standard packages
as well as recent approaches while being much easier to
use and extend. The model generalizes across many lens
types and is trivial to integrate into existing 3D reconstruc-
tion and rendering systems. Visit our project website at:
https://neural-lens.github.io .
| 1. Introduction
Camera calibration is essential for many computer vision
applications: it is the crucial component mapping measure-
ments and predictions between images and the real world.
This makes calibration a fundamental building block of 3D
reconstruction and mapping applications, and of any system
that relies on spatial computing, such as autonomous driving
or augmented and virtual reality. Whereas camera extrin-
sics and the parameters of a pinhole model can be easily
described and optimized, this often does not hold for other
parameters of an optical system and, in particular, lenses. Yet
*Work done during an internship at RLR.
1The approach is visualized on FisheyeNeRF recordings [23].
Figure 1. Method Overview. The optical stack leads to light ray
distortion and vignetting. We show that invertible residual networks
are a powerful tool to model the distortion for projection and ray
casting across many lenses and in many scenarios. Additionally,
we propose a novel type of calibration board (top left) that can
optionally be used to improve calibration accuracy. For evaluation,
we propose the ‘SynLens’ dataset to evaluate lens models at scale.1
lenses have a fundamental influence on the captured image
through distortion and vignetting effects.
Recent results in 3D reconstruction and rendering sug-
gest that end-to-end modeling and optimization of the im-
age formation process leads to the highest fidelity scene
reproductions [33, 34]. Furthermore, per-pixel gradients are
readily available in this scenario and could serve as a means
to optimize a model of all components of the optical stack
to improve reconstruction quality. However, modeling and
optimizing lens parameters in full generality and also differ-
entiably is hard: camera lenses come in all kinds of forms
and shapes (e.g., pinhole, fisheye, catadioptric) with quite
different optical effects.
So how can we create a flexible and general and differ-
entiable lens model with enough parameters to approximate
any plausible distortion? In classical parametric models, the
internals of the camera are assumed to follow a model with a
limited number of parameters (usually a polynomial approx-
imation). These approaches work well when the distortion
is close to the approximated function, but cannot general-
ize beyond that specific function class. On the other hand,
non-parametric models that associate each pixel with a 3D
ray have also been explored. These models are designed
to model any type of lens, but tend to require dense key-
point measurements due to over-parameterization. Hence,
we aim to find models with some level of regularization
to prevent such issues, without unnecessarily constraining
the complexity of the distortion function. Our key insight
is to use an invertible neural network (INN) to model ray
distortion, combined with standard camera intrinsics and
extrinsics. This means that we model the camera lens as a
mapping of two vector fields using a diffeomorphism ( i.e.,
a bijective mapping where both the mapping and its inverse
are differentiable), represented by an INN. This approach
usefully leverages the invertibility constraints provided by
INNs to model the underlying physics of the camera lens.
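The following toy sketch illustrates the kind of invertible-residual construction this refers to: an i-ResNet-style block whose residual branch is kept contractive via spectral normalization, so the forward map can be inverted by fixed-point iteration. It reflects our minimal reading of the idea rather than the paper's model; the layer sizes, contraction factor, and 2D-coordinate interface are assumptions.

```python
# Minimal invertible-residual distortion field: y = x + g(x) with g Lipschitz < 1,
# so the inverse exists and is recovered by fixed-point iteration.
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class InvertibleBlock(nn.Module):
    def __init__(self, dim=2, hidden=64, scale=0.9):
        super().__init__()
        self.scale = scale  # keep the residual strictly contractive
        self.g = nn.Sequential(
            spectral_norm(nn.Linear(dim, hidden)), nn.ELU(),
            spectral_norm(nn.Linear(hidden, dim)))

    def forward(self, x):            # undistorted -> distorted coordinates
        return x + self.scale * self.g(x)

    def inverse(self, y, iters=30):  # distorted -> undistorted (fixed point)
        x = y.clone()
        for _ in range(iters):
            x = y - self.scale * self.g(x)
        return x

class NeuralLens(nn.Module):
    def __init__(self, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(InvertibleBlock() for _ in range(n_blocks))

    def project(self, pts2d):        # used for point projection
        for b in self.blocks:
            pts2d = b(pts2d)
        return pts2d

    def unproject(self, pix2d):      # used for ray casting
        for b in reversed(self.blocks):
            pix2d = b.inverse(pix2d)
        return pix2d

lens = NeuralLens()
x = torch.rand(100, 2)
err = (lens.unproject(lens.project(x)) - x).abs().max()  # should be near zero
```

Because both directions are differentiable, such a block can be dropped into a projection-based calibration loop or a ray-casting reconstruction pipeline without any structural change.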
Our lens model has several advantages. Its formulation
makes it easy to differentiate point projection and ray cast-
ing operations in deep learning frameworks and it can be
integrated into any end-to-end differentiable pipeline, with
an inductive bias that serves as a useful regularizer for lens
models. It is flexible: we can scale the model parameters
to adapt to different kinds of lenses, and it can be optimized using gradient-based
methods for point projection as well as ray casting. This
makes our model applicable to pattern-based camera cal-
ibration as well as to dense reconstruction where camera
parameter refinement is desired. In the case of (optional)
marker-based calibration, we suggest to use an end-to-end
optimized marker board and keypoint detector. The proposed
marker board outperforms several other alternatives in our
experiments, and can easily be adjusted to be particularly
robust to distortions of different sensor and lens types.
It is currently impossible to evaluate lens models at scale
in a standardized way: large-scale camera lens benchmarks
including ground truth data simply do not exist. We pro-
pose to address this issue by generating a synthetic dataset,
called SynLens, consisting of more than 400 different lens
profiles from the open-source Lensfun database. To create
SynLens, we simulate distortion and vignetting and (option-
ally) keypoint extraction noise using real lens characteristics
to account for a wide variety of lenses and cameras.
We provide qualitative and quantitative comparisons with
prior works and show that our method produces more ac-
curate results in a wide range of settings, including pre-
calibration using marker boards, fine-tuning camera models
during 3D reconstruction, and using quantitative evaluation
on the proposed SynLens dataset. We show that our model
achieves subpixel accuracy even with just a few keypoints
and is robust to noisy keypoint detections. The proposed
method is conceptually simple and flexible, yet achieves
state-of-the-art results on calibration problems. We attribute
this success to the insight that an INN provides a useful in-
ductive bias for lens modeling and validate this design choice via ablations on ResNet-based models. To summarize, we
claim the following contributions:
•A novel formulation and analysis of an invertible ResNet-
based lens distortion model that generalizes across many
lens types, is easy to implement and extend;
•A new way to jointly optimize marker and keypoint de-
tectors to increase the robustness of pattern-based cali-
bration;
•A large-scale camera lens benchmark for evaluating the
performance of marker detection and camera calibration;
•Integration of the proposed method into a neural ren-
dering pipeline as an example of purely photometric
calibration.
|
Wu_Deep_Stereo_Video_Inpainting_CVPR_2023 | Abstract
Stereo video inpainting aims to fill the missing regions
on the left and right views of the stereo video with plausi-
ble content simultaneously. Compared with the single video
inpainting that has achieved promising results using deep
convolutional neural networks, inpainting the missing re-
gions of stereo video has not been thoroughly explored. In
essence, apart from the spatial and temporal consistency
that single video inpainting needs to achieve, another key
challenge for stereo video inpainting is to maintain the
stereo consistency between left and right views and hence
alleviate the 3D fatigue for viewers. In this paper, we pro-
pose a novel deep stereo video inpainting network named
SVINet, which is the first attempt for stereo video inpainting
task utilizing deep convolutional neural networks. SVINet
first utilizes a self-supervised flow-guided deformable tem-
poral alignment module to align the features on the left
and right view branches, respectively. Then, the aligned
features are fed into a shared adaptive feature aggrega-
tion module to generate missing contents of their respec-
tive branches. Finally, the parallax attention module (PAM)
that uses the cross-view information to consider the signif-
icant stereo correlation is introduced to fuse the completed
features of left and right views. Furthermore, we develop
a stereo consistency loss to regularize the trained parame-
ters, so that our model is able to yield high-quality stereo
video inpainting results with better stereo consistency. Ex-
perimental results demonstrate that our SVINet outperforms
state-of-the-art single video inpainting models.
| 1. Introduction
Video inpainting aims to fill in missing region with plau-
sible and coherent contents for all video frames. As a fun-
damental task in computer vision, video inpainting is usu-
ally adopted to enhance visual quality. It has great value
in many practical applications, such as scratch restoration [2], undesired object removal [34], and autonomous driving [24].
*Corresponding author
Figure 1. An example of visual comparison with state-of-the-art single video inpainting models (E2FGVI [23] and FGT [48]) on stereo video inpainting. As shown here, directly using the single video inpainting method to generate missing contents on the left view (first row) and right view (second row) leads to severe stereo inconsistency. In contrast, the proposed method not only generates vivid textures, but the parallax flow (third row) between the two views is also closer to the ground truth (third row of the input column). The closer the parallax flow is to the ground truth, the better the stereo consistency is maintained.
In recent years, relying on the powerful fea-
tures extraction capabilities of convolutional neural net-
work (CNN), existing deep single video inpainting meth-
ods [6, 13, 15, 18, 20, 23, 42, 46] have shown great success.
With the development of augmented reality (AR), virtual re-
ality (VR) devices, dual-lens smartphones, and autonomous
robots, there is an increasing demand for various stereo
video processing techniques, including stereo video inpaint-
ing. For example, in some scenarios, we not only remove
objects and edit contents, but also expect to recover the
missing regions in the stereo video. Although the traditional
stereo video inpainting methods [31,32] based on patch op-
timization have been preliminarily studied, the stereo video
inpainting based on deep learning has not been explored.
A naive solution of stereo video inpainting is to directly
apply the single video inpainting methods by completing
the missing regions of left and right views, respectively.
However, inpainting an individual video that only considers
the undamaged spatial-temporal statistics of one view will
ignore the geometric relationship between two views, caus-
ing severe stereo inconsistency as shown in Fig. 1. Besides,
another way to solve this task is to process the stereo video
frame-by-frame using the stereo image inpainting methods.
For example, Li et al. [22] designed a Geometry-Aware At-
tention (GAA) module to learn the geometry-aware guid-
ance from one view to another, so as to make the corre-
sponding regions in the inpainted stereo images consistent.
Nevertheless, compared to its image counterpart, stereo
video inpainting still needs to account for temporal con-
sistency. In this way, satisfactory performance cannot be
achieved by extending stereo image inpainting technique to
stereo video inpainting task. Therefore, to maintain tem-
poral and stereo consistency simultaneously, there are two
key points that need to be considered: (i) temporal modeling be-
tween consecutive frames (ii) correlation modeling between
left view and right view.
In fact, on the one hand, the missing contents in one
frame may exist in neighboring (reference) frames of a
video sequence. Thus, the temporal information between
the consecutive frames can be explored to generate miss-
ing contents of the current (target) frame. For example, a
classical technology pipeline is “alignment–aggregation”,
that is, the reference frame is first aligned to eliminate im-
age changes between the reference frame and target frame,
and then the aligned reference frame is aggregated to gener-
ate the missing contents of the target frame. On the other
hand, correlation modeling between two views has been
studied extensively in the stereo image super-resolution
task [3, 39, 44]. For instance, Wang et al. [39] proposed
the parallax attention module (PAM) to tackle the vary-
ing parallax problem in the parallax attention stereo super-
resolution network (PASSRnet). Ying et al. [44] developed
a stereo attention module (SAM) to address the informa-
tion incorporation issue in the stereo image super-resolution
models. More recently, Chen et al. [3] designed a cross-
parallax attention module (CPAM) which can capture the
stereo correspondence of respective additional information.
Motivated by above observation and analysis, in this pa-
per, we propose a stereo video inpainting network, named
SVINet. Specifically, SVINet first utilizes a self-supervised
flow-guided deformable temporal alignment module to
align the reference frames on the left and right view
branches at the feature level, respectively. Such operation
can eliminate the negative effect of image changes caused
by camera or object motion. Then, the aligned reference
frame features are fed into a shared adaptive feature aggre-
gation module to generate missing contents of their respec-
tive branches. Note that the missing contents of one view
may also exist in another view, we also introduce the most
relevant target frame from another view when completing
the missing regions of the current view, which can avoid the
computational complexity problem caused by simply aggre-gating all video frames. Finally, a modified PAM is used
to model the stereo correlation between the completed fea-
tures of the left and right views. Beyond that, inspired by
the success of end-point error (EPE) [10] in optical flow es-
timation [11], we introduce a new stereo consistency loss to
regularize training parameters, so that our model is able to
yield high-quality stereo video inpainting results with better
stereo consistency. We conduct extensive experiments on
two benchmark datasets, and the experimental results show
that our SVINet surpasses the performance of recent single
video inpainting methods in the stereo video inpainting.
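To make the flavor of such a loss concrete, a minimal sketch of an EPE-style stereo consistency term is given below. It reflects our reading of the idea rather than the exact loss in the paper, and the flow estimator is a stand-in placeholder for any differentiable left-to-right parallax/disparity estimator.

```python
# Sketch of an EPE-style stereo consistency term (illustrative assumptions):
# penalize the end-point error between the parallax flow estimated from the
# completed views and the parallax flow of the ground-truth views.
import torch

def epe(flow_pred, flow_gt):
    # End-point error: mean L2 distance between 2D flow vectors.
    return torch.norm(flow_pred - flow_gt, dim=1).mean()

def stereo_consistency_loss(left_pred, right_pred, left_gt, right_gt, flow_fn):
    flow_pred = flow_fn(left_pred, right_pred)   # (B, 2, H, W)
    with torch.no_grad():
        flow_gt = flow_fn(left_gt, right_gt)     # treated as the target
    return epe(flow_pred, flow_gt)

# Toy usage with a dummy "flow estimator" that just differences the views.
def dummy_flow(a, b):
    d = (a - b).mean(dim=1, keepdim=True)        # (B, 1, H, W)
    return torch.cat([d, torch.zeros_like(d)], dim=1)

B, H, W = 2, 32, 32
loss = stereo_consistency_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                               torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                               dummy_flow)
```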
To sum up, our contributions are summarized as follows:
• We propose a novel end-to-end stereo video inpainting
network named SVINet, where the spatially, tempo-
rally, and stereo consistent missing contents for cor-
rupted stereo video are generated. To the best of our
knowledge, this is the first work using deep learning to
solve stereo video inpainting task.
• Inspired by the end-point error (EPE) [10], we design
a stereo consistency loss to regularize training parame-
ters of SVINet, so that the training model can improve
the stereo consistency of the completed results.
• Experiments on two benchmark datasets demonstrate
the superiority of our proposed method in both quanti-
tative and qualitative evaluations. Notably, our method
also shed light on the subsequent research of stereo
video inpainting.
|
Wanyan_Active_Exploration_of_Multimodal_Complementarity_for_Few-Shot_Action_Recognition_CVPR_2023 | Abstract
Recently, few-shot action recognition receives increasing
attention and achieves remarkable progress. However, pre-
vious methods mainly rely on limited unimodal data (e.g.,
RGB frames) while the multimodal information remains rel-
atively underexplored. In this paper, we propose a novel
Active Multimodal Few-shot Action Recognition (AMFAR)
framework, which can actively find the reliable modality for
each sample based on task-dependent context information
to improve few-shot reasoning procedure. In meta-training,
we design an Active Sample Selection (ASS) module to or-
ganize query samples with large differences in the reliabil-
ity of modalities into different groups based on modality-
specific posterior distributions. In addition, we design an
Active Mutual Distillation (AMD) to capture discrimina-
tive task-specific knowledge from the reliable modality to
improve the representation learning of unreliable modal-
ity by bidirectional knowledge distillation. In meta-test,
we adopt Adaptive Multimodal Inference (AMI) to adap-
tively fuse the modality-specific posterior distributions with
a larger weight on the reliable modality. Extensive experi-
mental results on four public benchmarks demonstrate that
our model achieves significant improvements over existing
unimodal and multimodal methods.
| 1. Introduction
Over the past years, action recognition [20, 34, 52, 73]
has achieved significant progress with the emerge of deep
learning. However, these existing deep methods require a
large amount of labeled videos to guarantee their perfor-
mance. In practice, it is sometimes expensive or even im-
possible to collect abundant annotated data, which limits the
effectiveness of supervised methods. In order to deal with
this problem, more and more researchers begin to focus on
the few-shot action recognition (FSAR) task, which aims at
*corresponding author: Changsheng Xu.
Figure 1. Illustration of multimodal few-shot action recognition task. The main challenge is that the contribution of a specific modality highly depends on task-specific contextual information.
classifying unlabeled videos (query set) from novel action
classes with the help of only a few annotated samples (sup-
port set).
Recently, researchers have proposed many promising
few-shot action recognition methods, which can be roughly
divided into two groups: data augmentation-based methods
and alignment-based methods. Data augmentation-based
methods try to generate additional training data [18], self-
supervision signals [72] or auxiliary information [22, 69] to
promote robust representation learning. Alignment-based
methods [5,8,44,58,66,69,72] focus on matching the video
frames or segments in the temporal or spatial dimension to
measure the distance between query and support samples in
a fine-grained manner.
Although existing few-shot action recognition methods
have achieved remarkable performance, they mainly rely on
limited unimodal data (e.g. RGB frames) that are always
insufficient to reflect complex characteristics of human ac-
tions. When learning novel concepts from a few samples,
humans have the ability to integrate the multimodal percep-
tions (e.g. appearance, audio and motion) to enhance the
recognition procedure. In addition, in conventional action
recognition, many top-performing methods [23, 46, 56, 60]
always involve multiple modalities (e.g. vision, optical flow
and audio) which can provide complementary information
to comprehensively identify different actions. Whereas, the
multimodal information remains relatively underexplored
in few-shot action recognition where the data scarcity issue
magnifies the defect of unimodal data.
In this paper, we study multimodal few-shot action
recognition task , where the query and support samples are
multimodal as shown in Figure 1. With multimodal data,
we can alleviate the data scarcity issue through the com-
plementarity of different modalities. However, exploring
the multimodal complementarity in few-shot action recog-
nition is nontrivial. On the one hand, although there are
many widely used methods for fusing multimodal data, e.g.,
early fusion [47], late fusion [38, 64], it is still questionable
whether existing methods are suitable to be directly applied
in the few-shot scenario where only a few samples are avail-
able for each action class. On the other hand, the contri-
bution of a specific modality is not consistent for different
query samples and it highly depends on the contextual infor-
mation of both query and support samples in each few-shot
task. For example, as shown in Figure 1, if the few-shot task
is to identify query samples from the two action classes of
Snowboarding and Ballet dancing, the RGB data
and optical flow are equally important and they can com-
plement each other well to distinguish these two classes. In
contrast, for the two action classes of Snowboarding and
Skateboarding , the optical flow cannot provide use-
ful discriminative features to complement the vision infor-
mation or even harm the few-shot recognition performance
due to the motion resemblance between these two actions.
Therefore, we argue that it requires a task-dependent strat-
egy for exploring the complementarity between different
modalities in few-shot action recognition.
In order to reasonably take advantage of the comple-
mentarity between different modalities, we propose an Ac-
tive Multimodal Few-shot Action Recognition (AMFAR)
framework inspired by active learning [6], which can ac-
tively find the more reliable modality for each query sam-
ple to improve the few-shot reasoning procedure. AMFAR
adopts the episode-wise learning framework [53,63], where
each episode has a few labeled support samples and the un-
labeled query samples that need to be recognized. In each
episode of the meta-training , we firstly adopt modality-
specific backbone networks to extract the multimodal rep-
resentations for query samples and the prototypes of differ-
ent actions for support samples. We further compute the
modality-specific posterior distributions based on query-to-
prototype distances. Then, we adopt Active Sample Se-
lection (ASS) to organize query samples with large differ-
ences in the reliability of two modalities into two groups,
i.e., RGB-dominant group that contains samples where the
RGB modality is more reliable for conducting action recog-nition in the current episode, and Flow-dominant group
where the optical flow is more reliable. For each query
sample, the reliability of a specific modality is estimated ac-
cording to certainties of the modality-specific posterior dis-
tribution. Next, we design an Active Mutual Distillation
(AMD) mechanism to capture discriminative task-specific
knowledge from the reliable modality to improve the rep-
resentation learning of unreliable modality by bidirectional
knowledge guiding streams between modalities. For each
query in the RGB-dominant group, the RGB modality is
regarded as teacher while the optical flow is regarded as
student, and the query-to-prototype relation knowledge is
transferred from the teacher to the student with a distilla-
tion constraint. Simultaneously, for each query in the Flow-
dominant group, optical flow is regarded as teacher while
RGB is regarded as student, and the knowledge distillation
is conducted in the opposite direction. In the meta-test
phase , we adopt Adaptive Multimodal Inference (AMI)
to conduct the few-shot inference for each query sample
by adaptively fusing the posterior distributions predicted
from different modalities with a larger weight on the reli-
able modality.
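A small sketch of how such certainty-based grouping and adaptive fusion could look is given below. It is illustrative only: the temperature, the negative-entropy certainty, and the softmax fusion weights are our assumptions, not the paper's exact formulation.

```python
# Modality-specific posteriors from query-to-prototype distances; the modality
# with lower prediction entropy is treated as the reliable one per query, and
# inference fuses the two posteriors with certainty-derived weights.
import torch
import torch.nn.functional as F

def posterior(query_feat, prototypes, temperature=0.1):
    # query_feat: (Q, D), prototypes: (C, D) -> (Q, C) class posterior.
    dist = torch.cdist(query_feat, prototypes)            # (Q, C)
    return F.softmax(-dist / temperature, dim=-1)

def certainty(p, eps=1e-8):
    return (p * (p + eps).log()).sum(dim=-1)              # negative entropy; higher = more certain

Q, C, D = 16, 5, 128
p_rgb = posterior(torch.randn(Q, D), torch.randn(C, D))
p_flow = posterior(torch.randn(Q, D), torch.randn(C, D))

c_rgb, c_flow = certainty(p_rgb), certainty(p_flow)
rgb_dominant = c_rgb > c_flow          # queries where RGB is the reliable modality
flow_dominant = ~rgb_dominant          # each group drives one distillation direction

# Adaptive multimodal inference: weight the reliable modality more heavily.
w = torch.softmax(torch.stack([c_rgb, c_flow], dim=-1), dim=-1)   # (Q, 2)
p_fused = w[:, :1] * p_rgb + w[:, 1:] * p_flow
pred = p_fused.argmax(dim=-1)
```

In this toy version, the boolean masks would decide which branch plays teacher in the mutual distillation step, while the fused posterior corresponds to the test-time inference rule.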
In summary, the main contributions of this paper are
fourfold: 1) We exploit the natural complementarity be-
tween different modalities to enhance the few-shot action
recognition procedure by actively finding the more reli-
able modality for each query sample. To our best knowl-
edge, we are the first to adopt the idea of active learn-
ing to explore the multimodal complementarity in few-shot
learning. 2) We propose an active mutual distillation strat-
egy to transfer task-dependent knowledge learned from the
reliable modality to guide the representation learning for
the unreliable modality, which can improve the discrimi-
native ability of the unreliable modality with the help of the
multimodal complementarity. 3) We propose an adaptive
multimodal few-shot inference approach to fuse modality-
specific results by paying more attention to the reliable
modality. 4) We conduct extensive experiments on four
challenging datasets and the results demonstrate that the
proposed method outperforms existing unimodal and mul-
timodal methods.
|
Wu_Fast_Point_Cloud_Generation_With_Straight_Flows_CVPR_2023 | Abstract
Diffusion models have emerged as a powerful tool for
point cloud generation. A key component that drives the
impressive performance for generating high-quality sam-
ples from noise is to iteratively denoise for thousands of steps. While beneficial, the complexity of the learning steps has limited its applications in many 3D real-world scenarios. To address this
limitation, we propose Point Straight Flow (PSF), a model
that exhibits impressive performance using one step. Our
idea is based on the reformulation of the standard diffusion
model, which optimizes the curvy learning trajectory into
a straight path. Further, we develop a distillation strategy
to shorten the straight path into one step without a perfor-
mance loss, enabling applications to 3D real-world with
latency constraints. We perform evaluations on multiple
3D tasks and find that our PSF performs comparably to the
standard diffusion model, outperforming other efficient 3D
point cloud generation methods. On real-world applications
such as point cloud completion and training-free text-guided
generation in a low-latency setup, PSF performs favorably.
| 1. Introduction
3D point cloud generation has many real-world appli-
cations across vision and robotics, including self-driving
and virtual reality. A lot of efforts have been devoted
to realistic 3D point cloud generation, such as V AE [ 14],
GAN [ 1,36], Normalizing Flow [ 13,16,43] and score-based
method [ 5,27,47,50], and diffusion model [ 27,47,50].
Among them, diffusion models gain increasing popularity
for generating realistic and diverse shapes by separating
the distribution map learning from a noise distribution to a
meaningful shape distribution into thousands of steps.
Despite the foregoing advantages, the transport trajectory
learning from a noise distribution to a meaningful shape
distribution also turns out to be a major efficiency bottle-
neck during inference since a diffusion model requires thou-
sands of generative steps to produce high-quality and diverse
shapes [ 11,37,50]. As a result, it leads to high computa-tion costs for generating meaningful point cloud in prac-
tice. Notice that the learning transport trajectory follows the
simulation process of solving stochastic differentiable equa-
tion (SDE). A trained neural SDE can have different distri-
bution mappings at each step, which makes the acceleration
challenging even with an advanced ordinary differentiable
equation (ODE) solver.
To address this challenge, several recent works have pro-
posed strategies that avoid using thousands of steps for the
meaningful 3D point cloud generation. For example, [ 26,35]
suggest distilling the high-quality 3D point cloud generator,
DDIM model [ 37], into a few-step or one-step generator.
While the computation cost is reduced by applying distilla-
tion to shorten the DDIM trajectory into one-step or few-step
generator. The distillation process learns a direct mapping
between the initial state and the final state of DDIM, which
needs to compress hundreds of irregular steps into one-step.
Empirically it leads to an obvious performance drop. Further,
these distillation techniques are mainly developed for gener-
ating images with the grid structure, which is unsuitable for
applying to point cloud generation since the point cloud is
an unordered set of points with irregular structures.
In this paper, we propose using one-step to generate 3D
point clouds. Our method, Point Straight Flow (PSF), learns
a straight generative transport trajectory from a noisy point
cloud to a desired 3D shape for acceleration. This is achieved
by passing the neural flow model once to estimate the trans-
port trajectory. Specifically, we first formulate an ODE trans-
port flow as the initial 3D generator with a simpler trajectory
compared with the diffusion model formulated in SDE. Then
we optimize the transport flow cost for the initial flow model
to significantly straighten the learning trajectory while main-
taining the model’s performance by adopting the ideas from
recent works [ 20,22]. This leads to a straight flow by op-
timizing the curvy learning trajectory into a straight path.
Lastly, with the straight transport trajectory, we further de-
sign a distillation technique to shorten the path into one-step
for 3D point cloud generation.
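The sketch below illustrates the general recipe in miniature: a linear-interpolation flow-matching loss for the initial velocity field, Euler simulation of the learned ODE, and a reflow step on self-generated pairs that straightens the trajectories before a one-step student is distilled. The network, sizes, and step counts are placeholders; this is not the released PSF code.

```python
# Straight-flow sketch: train v(x_t, t) on the straight path between noise and
# data, then "reflow" on (noise, generated-sample) pairs from its own ODE.
import torch
import torch.nn as nn

dim = 3 * 2048                      # a flattened toy point cloud
v = nn.Sequential(nn.Linear(dim + 1, 512), nn.SiLU(), nn.Linear(512, dim))

def flow_matching_loss(x0, x1):
    # x0: noise, x1: data; supervise v on the straight path x_t = (1-t)x0 + t x1.
    t = torch.rand(x0.shape[0], 1)
    x_t = (1 - t) * x0 + t * x1
    target = x1 - x0                # constant velocity of the straight path
    pred = v(torch.cat([x_t, t], dim=-1))
    return (pred - target).pow(2).mean()

@torch.no_grad()
def generate(x0, steps=100):
    # Euler simulation of the learned ODE dx/dt = v(x, t).
    x = x0
    for i in range(steps):
        t = torch.full((x.shape[0], 1), i / steps)
        x = x + v(torch.cat([x, t], dim=-1)) / steps
    return x

# Reflow: couple each noise sample with the shape the current flow maps it to,
# then reuse the same loss on these pairs to straighten the trajectories.
noise = torch.randn(4, dim)
paired_shapes = generate(noise)
loss = flow_matching_loss(noise, paired_shapes)   # optimize v with this loss
```

Once the trajectories are nearly straight, a single Euler step approximates the full simulation well, which is what makes the final one-step distillation feasible.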
To evaluate our method, we undertake an extensive set
of experiments on 3D point cloud tasks. We first verify
Figure 1. Trajectories during the generative process for point cloud generation. (a) The SDE (PVD) trajectory involves random noise in each step and thus gives a curvy trajectory. (b) The PSF initial flow model removes the random noise term and yields simulation trajectories with smaller transport cost. (c) By utilizing the reflow process on the initial flow model, we reduce the transport cost; as a result, the trajectories become straightened and easy to simulate with one step. (d) The straight path leads to a small time-discretization error during the simulation, which makes the model easy to distill into one step.
that our one-step PSF can generate high-quality point cloud
shapes, performing favorably relative to the diffusion-based
point cloud generator PVD [50] with a more than 700×
faster sampling on the unconditional 3D shape generation
task. Further, we demonstrate it is highly important to learn
straight generative transport trajectory for faster sampling by
comparing distillation baselines, including DDIM that are
difficult to generate shapes even with many more generative
steps. Finally, we perform evaluations on 3D real-world
applications, including point cloud completion and training-
free text-guided generation, to show the flexibility of our
one-step PSF generator.
•For the first time, we demonstrate that neural flow
trained with one step can generate high-quality 3D point
clouds by applying distillation strategy.
•We propose a novel 3D point cloud generative model,
Point Straight Flow (PSF). Our proposed PSF optimizes
the curvy transport trajectory between noisy samples
and meaningful point cloud shapes as a straight path.
•We show our PSF can generate 3D shapes with high-
efficiency on standard benchmarks such as uncondi-
tional shape generation and point cloud completion. We
also successfully extend PSF to real-world 3D appli-
cations including large-scene completion and training-
free text-guided generation.
|
Wang_LiDAR2Map_In_Defense_of_LiDAR-Based_Semantic_Map_Construction_Using_Online_CVPR_2023 | Abstract
Semantic map construction under bird’s-eye view (BEV)
plays an essential role in autonomous driving. In contrast
to camera image, LiDAR provides the accurate 3D obser-
vations to project the captured 3D features onto BEV space
inherently. However, the vanilla LiDAR-based BEV feature
often contains many indefinite noises, where the spatial fea-
tures have little texture and semantic cues. In this paper,
we propose an effective LiDAR-based method to build se-
mantic map. Specifically, we introduce a BEV pyramid fea-
ture decoder that learns the robust multi-scale BEV fea-
tures for semantic map construction, which greatly boosts
the accuracy of the LiDAR-based method. To mitigate the
defects caused by lacking semantic cues in LiDAR data, we
present an online Camera-to-LiDAR distillation scheme to
facilitate the semantic learning from image to point cloud.
Our distillation scheme consists of feature-level and logit-
level distillation to absorb the semantic information from
camera in BEV . The experimental results on challenging
nuScenes dataset demonstrate the efficacy of our proposed
LiDAR2Map on semantic map construction, which signifi-
cantly outperforms the previous LiDAR-based methods over
27.9% mIoU and even performs better than the state-of-the-
art camera-based approaches. Source code is available at:
https://github.com/songw-zju/LiDAR2Map.
| 1. Introduction
High-definition (HD) map contains the enriched seman-
tic understanding of elements on road, which is a fundamen-
tal module for navigation and path planning in autonomous
driving. Recently, online semantic map construction has at-
tracted increasing attention, which enables to construct HD
map at runtime with onboard LiDAR and cameras. It pro-
vides a compact way to model the environment around the
ego vehicle, which is convenient to obtain the essential in-
formation for the downstream tasks.
Most of recent online approaches treat semantic map
*Corresponding author is Jianke Zhu.
Figure 1. Comparisons on semantic map construction frameworks (camera-based, LiDAR-based, Camera-LiDAR fusion methods) and our proposed LiDAR2Map that presents an effective online Camera-to-LiDAR distillation scheme with a BEV feature pyramid decoder in training.
learning as a segmentation problem in bird’s-eye view
(BEV), which assigns a category label to each map pixel.
As shown in Fig. 1, the existing methods can be roughly
divided into three groups, including camera-based meth-
ods [19, 20, 30, 32, 52], LiDAR-based methods [10, 19] and
Camera-LiDAR fusion methods [19, 27, 37]. Among them,
camera-based methods are able to make full use of multi-
view images with the enriched semantic information, which
dominate this task with the promising performance. In con-
trast to camera image, LiDAR outputs the accurate 3D spa-
tial information that can be used to project the captured
features onto the BEV space. By taking advantage of the
geometric and spatial information, LiDAR-based methods
are widely explored in 3D object detection [18, 38, 47, 57]
while it is rarely investigated in semantic map construction.
HDMapNet-LiDAR [19] attempts to directly utilize the LiDAR data for map segmentation; however, it performs worse than the camera-based models because its vanilla BEV feature contains indefinite noise. Besides, map segmentation is a semantic-oriented task [27], while the semantic cues in LiDAR are not as rich as those in images. In this work, we
aim to exploit the LiDAR-based semantic map construction
by taking advantage of the global spatial information and
auxiliary semantic density from the image features.
In this paper, we introduce an efficient framework for se-
mantic map construction, named LiDAR2Map, which fully
exhibits the potentials of LiDAR-based model. Firstly,
we present an effective decoder to learn the robust multi-
scale BEV feature representations from the accurate spa-
tial point cloud information for semantic map. It provides
distinct responses and boosts the accuracy of our baseline
model. To make full use of the abundant semantic cues
from camera, we then suggest a novel online Camera-to-
LiDAR distillation scheme to further promote the LiDAR-
based model. It fully utilizes the semantic features from the
image-based network with a position-guided feature fusion
module (PGF2M). Both the feature-level and logit-level dis-
tillation are performed in the unified BEV space to facili-
tate the LiDAR-based network to absorb the semantic rep-
resentations during training. Specifically, we propose to
generate the global affinity map with the input low-level
and high-level feature guidance for the satisfactory feature-
level distillation. The inference process of LiDAR2Map is
efficient and direct without the computational cost of dis-
tillation scheme and auxiliary camera-based branch. Ex-
tensive experiments on the challenging nuScenes bench-
mark [4] show that our proposed model significantly outper-
forms the conventional LiDAR-based method (29.5% mIoU
vs. 57.4% mIoU). It even performs better than the state-of-
the-art camera-based methods by a large margin.
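For intuition, the snippet below sketches what feature-level and logit-level distillation in a shared BEV space can look like: an MSE term on BEV feature maps plus a KL term on temperature-softened segmentation logits. It is a generic sketch under our own assumptions, not the paper's exact losses or its affinity-map formulation.

```python
# Generic BEV distillation terms: student = LiDAR branch, teacher = camera branch.
import torch
import torch.nn.functional as F

def bev_distillation_losses(student_feat, teacher_feat, student_logits,
                            teacher_logits, temperature=2.0):
    """student_feat/teacher_feat: (B, C, H, W) BEV features;
    student_logits/teacher_logits: (B, K, H, W) segmentation logits."""
    # Feature-level distillation: align the two BEV feature maps directly.
    feat_loss = F.mse_loss(student_feat, teacher_feat.detach())

    # Logit-level distillation: KL between temperature-softened class maps.
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=1)
    p_teacher = F.softmax(teacher_logits.detach() / t, dim=1)
    logit_loss = F.kl_div(log_p_student, p_teacher, reduction='batchmean') * (t * t)

    return feat_loss, logit_loss

B, C, K, H, W = 2, 64, 4, 50, 50
feat_loss, logit_loss = bev_distillation_losses(
    torch.randn(B, C, H, W), torch.randn(B, C, H, W),
    torch.randn(B, K, H, W), torch.randn(B, K, H, W))
total = feat_loss + logit_loss   # added to the segmentation loss during training
```

Because the teacher branch only supplies detached targets, the camera network and both distillation terms can be discarded entirely at inference time, which is the property the framework relies on for efficient deployment.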
Our main contributions are summarized as: 1) an effi-
cient framework LiDAR2Map for semantic map construc-
tion, where the presented BEV pyramid feature decoder can
learn the robust BEV feature representations to boost the
baseline of our LiDAR-based model; 2) an effective online
Camera-to-LiDAR distillation scheme that performs both
feature-level and logit-level distillation during the training
to fully absorb the semantic representations from the im-
ages; 3) extensive experiments on nuScenes for semantic
map construction including map and vehicle segmentation
under different settings, shows the promising performance
of our proposed LiDAR2Map.
|
Xie_An_Actor-Centric_Causality_Graph_for_Asynchronous_Temporal_Inference_in_Group_CVPR_2023 | Abstract
The causality relation modeling remains a challenging
task for group activity recognition. The causality relations
describe the influence on the centric actor (effect actor)
from its correlative actors (cause actors). Most existing
graph models focus on learning the actor relation with syn-
chronous temporal features, which is insufficient to deal
with the causality relation with asynchronous temporal fea-
tures. In this paper, we propose an Actor-Centric Causal-
ity Graph Model, which learns the asynchronous temporal
causality relation with three modules, i.e., an asynchronous
temporal causality relation detection module, a causality
feature fusion module, and a causality relation graph infer-
ence module. First, given a centric actor and its correlative
actor, we analyze their influences to detect causality rela-
tion. We estimate the self influence of the centric actor with
self regression. We estimate the correlative influence from
the correlative actor to the centric actor with correlative
regression, which uses asynchronous features at different
timestamps. Second, we synchronize the two action features
by estimating the temporal delay between the cause action
and the effect action. The synchronized features are used to
enhance the feature of the effect action with a channel-wise
fusion. Third, we describe the nodes (actors) with causal-
ity features and learn the edges by fusing the causality re-
lation with the appearance relation and distance relation.
The causality relation graph inference provides crucial fea-
tures of effect action, which are complementary to the base
model using synchronous relation inference. Experiments
show that our method achieves state-of-the-art performance
on the Volleyball dataset and Collective Activity dataset.
| 1. Introduction
Group activity recognition is a challenging task to iden-
tify the group activity by analyzing the actors that perform
*Corresponding author.
Figure 1. Illustration of the causality relation. (a) Asynchronous
causality relation. (b) Synchronous temporal relation graph. (c)
Asynchronous temporal causality relation detection and fusion.
(d) Asynchronous temporal causality relation graph learns the in-
fluences of two actors to detect causality relation. In this work, we
enforce the graph model with asynchronous causality relation by
analyzing the actors’ influences.
different actions. It has been widely used in many appli-
cations in video surveillance, social interaction analysis,
and sports analysis [4, 14, 25, 40]. Unlike individual action
recognition, group activity recognition learns the relation
between actors to infer the group activity. The relation be-
tween two actors can be explained as cause-effect relation
(denoted as causality relation), in which the action of one
actor (denoted as the cause actor) impacts the action of an-
other actor (denoted as the effect actor). The two actors im-
pact each other individually resulting in different causality
relations. As the effect action performs after the cause ac-
tion with a temporal delay, the asynchronous temporal fea-
tures of the two actors hinder causality relation learning.
Existing methods describe the relation with the appearance
feature [15] and the position feature [24, 36]. The above
methods merely learn the spatial relation in each frame,
which neglect to describe the relation with the temporal
dynamics in the frame sequence. Some methods describe
the relation with the temporal feature learned by RNN [28]
and Transformer network [18]. The existing methods al-
ways learn the relation at the same timestamp with the syn-
chronous temporal feature, and neglect to describe the in-
fluences of two actors, who have asynchronous temporal
relation. Therefore, it is still challenging to capture the rela-
tion with asynchronous temporal features for group activity
recognition.
As the causality relation is asynchronous, we decompose
our graph model with asynchronous causality relation into
two sub-tasks illustrated in Figure 1b: (1) the causality rela-
tion detection task by analyzing the influences of two actors
with their asynchronous features, and (2) the asynchronous
causality feature fusion task by integrating the feature of the
effect actor with the synchronized feature of the cause actor.
Figure 1 shows the group activity prediction influenced
by different actors. (1) In Figure 1a, with a temporal delay
after the cause action performs (digging), the cause actor
changes to the action ”falling”, and the effect actor changes
to the action ”jumping”. Actors change their states with
a temporal delay in an asynchronous way, which hinders
cause-effect (causality) relation learning. (2) As shown in
Figure 1b, the traditional method addresses the group ac-
tivity using a synchronous temporal relation graph, which
contains a large number of irrelevant relations. The syn-
chronous graph is hard to detect the cause-effect relation
with the synchronous features at a single timestamp. With-
out learning the asynchronous causality relation, the model
can not explain why the effect actor jumps. For example, it
uses the falling cause actor to mispredict group activity as
the ”Right pass”. (3) To detect the causality relations be-
tween the centric actor and its correlative actor, we focus
on learning the influences with their asynchronous features.
When the centric actor is affected by the correlative actor,
the influence of two actors is larger than the influence of
the centric actor itself. The influences of two actors can de-
tect the causality relation from the correlative actor to the
centric actor. As shown in Figure 1c, after the correlative
actor moves, the centric actor jumps with the temporal de-
lay of one frame (+1). Then, the causality relation is used to
enhance the centric actor features by fusing two actors’ fea-
tures. (4) When we learn asynchronous causality relationsto form a causality relation graph, which can select the rel-
evant edges and enhance node features. In Figure 1d, the
causality inference process finds the relation from the mov-
ing actor to the jumping actor, and helps to explain the actor
jumps for setting the volleyball. The causality relation ana-
lyzes the influences learned with the asynchronous tempo-
ral features, which is complementary to the relation learned
with synchronous temporal features. The framework by in-
tegrating two relation graphs can successfully predict the
group activity as the “Right set”.
In this paper, we propose an Actor-Centric Causality
Graph (ACCG) Model to detect the asynchronous causal-
ity relation for group activity recognition. Figure 2 shows
the overview of the proposed model. The model consists
of three modules, i.e., an asynchronous temporal causality
relation detection module, a causality feature fusion mod-
ule, and a causality relation inference module. First, we
detect the causality relation between the centric actor and
its correlative actor by analyzing the self influence and the
correlative influence. We learn the self influence with self
regression, and learn the correlative influence with correl-
ative regression. We extend the correlative regression with
asynchronous features of the correlative actor, which helps
to learn the asynchronous causality relation by analyzing
the influences of the two actors. Second, the temporal delay
of the causality relation is estimated to synchronize two ac-
tion features. We integrate them with a channel-wise fusion
to learn the causality feature of the effect actor. Third, we
describe the actors (nodes) with asynchronous causality fea-
tures, and describe the edges with the causality relation for
graph inference. The causality relation inference provides
the crucial features of actors, which are complementary to
synchronous relation inference. We apply the base model to
learn the synchronous relation inference, and add two rela-
tion inferences to enhance the group relation learning.
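The following toy sketch conveys the regression-based influence test in a Granger-causality-like spirit: the self-regression error of the centric actor's features is compared with the error of a correlative regression that also sees the other actor's earlier features, and a clear error drop is read as a causality relation. It is an illustration under our own simplifying assumptions (linear regressors, a fixed lag), not the paper's modules.

```python
# Influence estimation via regression residuals (illustrative sketch).
import torch
import torch.nn as nn

feat_dim, lag = 32, 1
self_reg = nn.Linear(feat_dim, feat_dim)        # predict centric actor from its own past
corr_reg = nn.Linear(2 * feat_dim, feat_dim)    # ... from its own past + correlative actor's past

def influence_errors(centric_seq, correlative_seq):
    """centric_seq, correlative_seq: (T, D) per-frame actor features."""
    past_c = centric_seq[:-lag]
    past_r = correlative_seq[:-lag]
    future_c = centric_seq[lag:]
    self_err = (self_reg(past_c) - future_c).pow(2).mean()
    corr_err = (corr_reg(torch.cat([past_c, past_r], dim=-1)) - future_c).pow(2).mean()
    return self_err, corr_err

T = 10
centric = torch.randn(T, feat_dim)
correlative = torch.randn(T, feat_dim)
self_err, corr_err = influence_errors(centric, correlative)
# A correlative regression that clearly beats the self regression is taken as
# evidence of a causality relation from the correlative actor to the centric one.
is_cause = (self_err - corr_err) > 0.1 * self_err
```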
Our contributions are summarized as follows:
(1) We propose an Actor-Centric Causality Graph
Model, which detects the asynchronous causality relation
by analyzing the influences of two actors at different times-
tamps. We design the self regression to estimate the self
influence of the centric actor. We design the correlative re-
gression with the asynchronous features of two actors to es-
timate their correlative influence.
(2) We design a causality feature fusion to enhance the
feature of the centric actor by integrating it with the syn-
chronized feature of its correlative actor. The synchronized
feature is generated by estimating the temporal delay be-
tween the asynchronous features of two actors.
(3) Our Actor-Centric Causality Graph Model learns
the asynchronous relation, which is complementary to syn-
chronous relation learning. Our framework integrates two
relations and achieves state-of-the-art performance on the
Volleyball dataset and Collective Activity dataset.
Figure 2. The overview of the actor-centric causality graph model. The base model learns the relation with synchronous features at each
timestamp. The actor-centric causality graph is proposed to analyze the influence of two actors with asynchronous features, which can
learn the causality relation in the asynchronous temporal causality relation detection module. The causality feature fusion model enhances
the centric action with synchronized correlative action features. The causality relation graph inference module learns the contextual feature
with causality relation. The framework combines the causality relation graph and the base graph for better graph relation learning.
|
Wei_TAPS3D_Text-Guided_3D_Textured_Shape_Generation_From_Pseudo_Supervision_CVPR_2023 | Abstract
In this paper, we investigate an open research task
of generating controllable 3D textured shapes from the
given textual descriptions. Previous works either require
ground truth caption labeling or extensive optimization
time. To resolve these issues, we present a novel frame-
work, TAPS3D, to train a text-guided 3D shape generator
with pseudo captions. Specifically, based on rendered 2D
images, we retrieve relevant words from the CLIP vocab-
ulary and construct pseudo captions using templates. Our
constructed captions provide high-level semantic supervi-
sion for generated 3D shapes. Further, in order to pro-
duce fine-grained textures and increase geometry diversity,
we propose to adopt low-level image regularization to en-
able fake-rendered images to align with the real ones. Dur-
ing the inference phase, our proposed model can generate
3D textured shapes from the given text without any addi-
tional optimization. We conduct extensive experiments to
analyze each of our proposed components and show the
efficacy of our framework in generating high-fidelity 3D
textured and text-relevant shapes. Code is available at
https://github.com/plusmultiply/TAPS3D
| 1. Introduction
3D objects are essential in various applications [21–24],
such as video games, film special effects, and virtual real-
∗Equal contribution. Work done during an internship at Bytedance.
†Corresponding author.
ity. However, realistic and detailed 3D object models are
usually hand-crafted by well-trained artists and engineers
slowly and tediously. To expedite this process, many re-
search works [3,9,13,34,48,49] use deep generative models
to achieve automatic 3D object generation. However, these
models are primarily unconditioned, which can hardly gen-
erate objects as humans will.
In order to control the generated 3D objects from text,
prior text-to-3D generation works [43, 44] leverage the pre-
trained vision-language alignment model CLIP [37], such
that they can only use 3D shape data to achieve zero-shot
learning. For example, Dream Fields [13] combines the ad-
vantages of CLIP and NeRF [27], which can produce both
3D representations and renderings. However, Dream Fields
costs about 70 minutes on 8 TPU cores to produce a sin-
gle result. This means the optimization time during the
inference phase is too slow to use in practice. Later on,
GET3D [9] is proposed with faster inference time, which
incorporates StyleGAN [15] and Deep Marching Tetrahe-
dral (DMTet) [46] as the texture and geometry generators
respectively. Since GET3D adopts a pretrained model to do
text-guided synthesis, they can finish optimization in less
time than Dream Fields. But the requirement of test-time
optimization still limits its application scenarios. CLIP-
NeRF [50] utilizes conditional radiance fields [45] to avoid
test-time optimization, but it requires ground truth text data
for the training purpose. Therefore, CLIP-NeRF is only ap-
plicable to a few object classes that have labeled text data
for training, and its generation quality is restricted by the
NeRF capacity.
To address the aforementioned limitations, we propose
to generate pseudo captions for 3D shape data based on
their rendered 2D images and construct a large amount of
⟨3D shape, pseudo captions⟩ as training data, such that
the text-guided 3D generation model can be trained over
them. To this end, we propose a novel framework for
Text-guided 3D textured shApe generation from Pseudo
Supervision (TAPS3D), in which we can generate high-
quality 3D shapes without requiring annotated text training
data or test-time optimization.
Specifically, our proposed framework is composed of
two modules, where the first generates pseudo captions for
3D shapes and feeds them into a 3D generator to con-
duct text-guided training within the second module. In the
pseudo caption generation module, we follow the language-
free text-to-image learning scheme [20, 54]. We first adopt
the CLIP model to retrieve relevant words from given ren-
dered images. Then we construct multiple candidate sen-
tences based on the retrieved words and pick sentences hav-
ing the highest CLIP similarity scores with the given im-
ages. The selected sentences are used as our pseudo cap-
tions for each 3D shape sample.
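A compact sketch of this retrieve-and-template idea is shown below using the open-source CLIP interface. The vocabulary, template, and scoring details are our placeholders rather than the exact procedure or prompts used in the paper, and the `clip` package (github.com/openai/CLIP) is assumed to be installed.

```python
# Retrieve words that CLIP scores highly for a rendered view, fill a simple
# template, and keep the candidate caption with the best image-text similarity.
import itertools
import torch
import clip

device = "cpu"
model, _ = clip.load("ViT-B/32", device=device)

vocabulary = ["red", "wooden", "sports", "vintage", "car", "chair"]  # toy vocabulary
rendered_view = torch.rand(1, 3, 224, 224)          # stand-in for a rendered image

with torch.no_grad():
    img_feat = model.encode_image(rendered_view)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)

    # 1) Retrieve the words most similar to the rendered view.
    word_feats = model.encode_text(clip.tokenize(vocabulary))
    word_feats = word_feats / word_feats.norm(dim=-1, keepdim=True)
    scores = (img_feat @ word_feats.T).squeeze(0)
    top_words = [vocabulary[i] for i in scores.topk(3).indices.tolist()]

    # 2) Build candidate captions from a template over the retrieved words.
    candidates = ["a 3D rendering of a {} {}".format(a, b)
                  for a, b in itertools.permutations(top_words, 2)]

    # 3) Keep the caption CLIP likes best as the pseudo caption for this shape.
    cap_feats = model.encode_text(clip.tokenize(candidates))
    cap_feats = cap_feats / cap_feats.norm(dim=-1, keepdim=True)
    pseudo_caption = candidates[(img_feat @ cap_feats.T).argmax().item()]
```

The resulting pseudo caption then serves as the text condition for the generator during training, so no human-written description of the shape is ever required.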
Following the notable progress of text-to-image generation models [29, 38, 39, 42, 52], we use a text-conditioned GAN architecture in the text-guided 3D generator training
part. We adopt the pretrained GET3D [9] model as our
backbone network since it has been demonstrated to gen-
erate high-fidelity 3D textured shapes across various object
classes. We input the pseudo captions as the generator con-
ditions and supervise the training process with high-level
CLIP supervision in an attempt to control the generated 3D
shapes. Moreover, we introduce a low-level image regu-
larization loss to produce fine-grained textures and increase
geometry diversity. We empirically find that training only the mapping networks of a pretrained GET3D model keeps the training stable and fast while preserving the generation quality of the pretrained model.
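The combined objective sketched below illustrates how high-level CLIP supervision and a low-level image regularization term could be added together; the L1 pixel term, the loss weights, and the tensor conventions are placeholders rather than the exact losses used by TAPS3D.

```python
# Sketch of a text-guided training step with a high-level CLIP loss plus a
# low-level image regularization term (placeholder form, not the paper's exact objective).
import torch
import torch.nn.functional as F


def training_losses(clip_model, fake_render, real_render, text_tokens,
                    w_clip=1.0, w_reg=10.0):
    # High-level semantic supervision: maximize CLIP similarity between the
    # rendered image of the generated shape and its pseudo caption.
    # fake_render / real_render are assumed to be CLIP-preprocessed image tensors.
    img_feat = F.normalize(clip_model.encode_image(fake_render), dim=-1)
    txt_feat = F.normalize(clip_model.encode_text(text_tokens), dim=-1)
    loss_clip = 1.0 - (img_feat * txt_feat).sum(dim=-1).mean()

    # Low-level image regularization: keep fine-grained texture statistics
    # close to a reference rendering (placeholder pixel-space L1).
    loss_reg = F.l1_loss(fake_render, real_render)

    return w_clip * loss_clip + w_reg * loss_reg
```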
Our proposed model TAPS3D can produce high-quality 3D textured shapes with strong text control, as shown in Fig. 1, without any per-prompt test-time optimization. Our
contribution can be summarized as:
• We introduce a new 3D textured shape generative framework, which can generate high-quality, high-fidelity 3D shapes without requiring paired text and 3D shape training data.
• We propose a simple pseudo caption generation method that enables text-conditioned 3D generator training, such that the model can generate text-controlled 3D textured shapes without test-time optimization, significantly reducing the time cost.
• We introduce a low-level image regularization loss on
top of the high-level CLIP loss in an attempt to produce
fine-grained textures and increase geometry diversity. |
Wang_Multimodal_Industrial_Anomaly_Detection_via_Hybrid_Fusion_CVPR_2023 | Abstract
2D-based Industrial Anomaly Detection has been widely discussed; however, multimodal industrial anomaly detection based on 3D point clouds and RGB images still has many untouched fields. Existing multimodal industrial
anomaly detection methods directly concatenate the mul-
timodal features, which leads to a strong disturbance be-
tween features and harms the detection performance. In this
paper, we propose Multi-3D-Memory (M3DM), a novel multimodal anomaly detection method with a hybrid fusion scheme: firstly, we design an unsupervised feature fusion
with patch-wise contrastive learning to encourage the in-
teraction of different modal features; secondly, we use a
decision layer fusion with multiple memory banks to avoid
loss of information and additional novelty classifiers to
make the final decision. We further propose a point fea-
ture alignment operation to better align the point cloud and
RGB features. Extensive experiments show that our multi-
modal industrial anomaly detection model outperforms the
state-of-the-art (SOTA) methods on both detection and seg-
mentation precision on MVTec-3D AD dataset. Code at
github.com/nomewang/M3DM.
| 1. Introduction
Industrial anomaly detection aims to find the abnormal
region of products and plays an important role in industrial
quality inspection. In industrial scenarios, it’s easy to ac-
quire a large number of normal examples, but defect exam-
ples are rare. Current industrial anomaly detection methods are mostly unsupervised, i.e., they train only on normal examples and encounter defect examples only during inference. Moreover, most existing industrial anomaly de-
tection methods [2,9,25,34] are based on 2D images. How-
ever, in the quality inspection of industrial products, human
inspectors utilize both the 3D shape and color characteris-
tics to determine whether it is a defective product, where
*Equal contributions. This work was done when Yue Wang was an intern
at Tencent Youtu Lab.
†Corresponding author.
Figure 1. Illustration of the MVTec-3D AD dataset [3] (categories shown include bagel, cookie, potato, dowel, foam, and tire). The second and third rows show the input point cloud data and the RGB data. The fourth and fifth rows show prediction results; judging against the ground truth, our method produces more accurate predictions than the previous method (PatchCore + FPFH).
3D shape information is important and essential for correct
detection. As shown in Fig. 1, for cookie and potato, it is
hard to identify defects from the RGB image alone. With
the development of 3D sensors, the MVTec-3D AD dataset [3] (Fig. 1), which contains both 2D images and 3D point cloud data, has recently been released, facilitating research on multimodal industrial anomaly detection.
The core idea for unsupervised anomaly detection is
to find out the difference between normal representations
and anomalies. Current 2D industrial anomaly detec-
tion methods can be categorized into two categories: (1)
Reconstruction-based methods. Image reconstruction tasks
are widely used in anomaly detection methods [2, 9, 14,
22, 34, 35] to learn normal representation. Reconstruction-
based methods are easy to implement for a single modal
input (2D image or 3D point cloud). But for multimodal in-
puts, it is hard to find a reconstruction target. (2) Pretrained
feature extractor-based methods. An intuitive way to uti-
lize the feature extractor is to map the extracted feature to
a normal distribution and find the out-of-distribution one as
an anomaly. Normalizing flow-based methods [15, 27, 33]
use an invertible transformation to directly construct the normal distribution, and memory bank-based methods [8, 25] store representative features to implicitly construct the
feature distribution. Compared with reconstruction-based
methods, directly using a pretrained feature extractor does
not involve the design of a multimodal reconstruction tar-
get and is a better choice for the multimodal task. Besides
that, current multimodal industrial anomaly detection meth-
ods [16, 27] directly concatenate the features of the two
modalities together. However, when the feature dimension is high, the disturbance between multimodal features becomes severe and degrades detection performance.
To address the above issues, we propose a novel mul-
timodal anomaly detection scheme based on RGB images
and 3D point cloud, named Multi-3D-Memory (M3DM) .
Different from the existing methods that directly concate-
nate the features of the two modalities, we propose a hy-
brid fusion scheme to reduce the disturbance between mul-
timodal features and encourage feature interaction. We pro-
pose Unsupervised Feature Fusion (UFF) to fuse multi-
modal features, which is trained using a patch-wise con-
trastive loss to learn the inherent relation between multi-
modal feature patches at the same position. To encourage
the anomaly detection model to keep the single domain in-
ference ability, we construct three memory banks separately
for RGB, 3D and fused features. For the final decision, we
construct Decision Layer Fusion (DLF) to consider all of
the memory banks for anomaly detection and segmentation.
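As a concrete illustration of the patch-wise contrastive idea behind UFF, the sketch below computes a symmetric InfoNCE loss over RGB and point-cloud patch features that are aligned at the same positions. The projection heads, feature dimensions, temperature, and the simple concatenation-based fused feature are illustrative assumptions, not the released M3DM implementation.

```python
# Sketch of a patch-wise contrastive (InfoNCE-style) objective for fusing RGB
# and point-cloud patch features; corresponding rows are treated as positives.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchContrastiveFusion(nn.Module):
    def __init__(self, rgb_dim=768, pc_dim=1152, proj_dim=256, tau=0.07):
        super().__init__()
        self.proj_rgb = nn.Linear(rgb_dim, proj_dim)
        self.proj_pc = nn.Linear(pc_dim, proj_dim)
        self.tau = tau

    def forward(self, rgb_patches, pc_patches):
        # rgb_patches: (N, D_rgb), pc_patches: (N, D_pc), patches at the same positions.
        z_rgb = F.normalize(self.proj_rgb(rgb_patches), dim=-1)
        z_pc = F.normalize(self.proj_pc(pc_patches), dim=-1)
        logits = z_rgb @ z_pc.t() / self.tau               # (N, N) similarity matrix
        targets = torch.arange(z_rgb.size(0), device=z_rgb.device)
        # Symmetric InfoNCE: each patch must match its cross-modal counterpart.
        loss = 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))
        fused = torch.cat([z_rgb, z_pc], dim=-1)           # simple fused patch feature
        return fused, loss
```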
Anomaly detection needs features that contain both
global and local information, where the local information
helps detect small defects, and global information focuses
on the relationship among all parts. Based on this obser-
vation, we utilize a Point Transformer [20, 36] for the 3D
feature and Vision Transformer [5,11] for the RGB feature.
We further propose a Point Feature Alignment (PFA) opera-
tion to better align the 3D and 2D features.
Our contributions are summarized as follows:
• We propose M3DM, a novel multimodal industrial
anomaly detection method with hybrid feature fusion,
which achieves state-of-the-art detection and segmentation precision on MVTec-3D AD.
• We propose Unsupervised Feature Fusion (UFF) with
patch-wise contrastive loss to encourage interaction
between multimodal features.
• We design Decision Layer Fusion (DLF) utilizing mul-
tiple memory banks for robust decision-making.
• We explore the feasibility of the Point Transformer
in multimodal anomaly detection and propose Point
Feature Alignment (PFA) operation to align the
Point Transformer feature to a 2D plane for high-
performance 3D anomaly detection. |
Wei_CFA_Class-Wise_Calibrated_Fair_Adversarial_Training_CVPR_2023 | Abstract
Adversarial training has been widely acknowledged as
the most effective method to improve the adversarial robust-
ness against adversarial examples for Deep Neural Net-
works (DNNs). So far, most existing works focus on en-
hancing the overall model robustness, treating each class
equally in both the training and testing phases. Although the disparity in robustness among classes has been revealed, few works try to make adversarial training fair at the class level without sacrificing overall robustness. In this paper,
we are the first to theoretically and empirically investigate
the preference of different classes for adversarial configu-
rations, including perturbation margin, regularization, and
weight averaging. Motivated by this, we further propose
a Class-wise calibrated Fair Adversarial training framework, named CFA, which customizes specific training con-
figurations for each class automatically. Experiments on
benchmark datasets demonstrate that our proposed CFA
can improve both overall robustness and fairness notably
over other state-of-the-art methods. Code is available at
https://github.com/PKU-ML/CFA .
| 1. Introduction
Deep Neural Networks (DNNs) have achieved remark-
able success in a variety of tasks, but their vulnerability against adversarial examples [11, 20] has caused serious concerns about their application in safety-critical scenar-
ios [6, 15]. DNNs can be easily fooled by adding small,
even imperceptible perturbations to the natural examples.
To address this issue, numerous defense approaches have
been proposed [2, 9, 17, 18, 28], among which Adversarial
Training (AT) [16, 25] has been demonstrated as the most
effective method to improve the model robustness against
such attacks [1, 27]. Adversarial training can be formulated
as the following min-max optimization problem:
$$\min_{\theta}\;\mathbb{E}_{(x,y)\sim\mathcal{D}}\;\max_{\|x'-x\|\le\epsilon}\;\mathcal{L}(\theta; x', y), \qquad (1)$$
*Corresponding Author: Yisen Wang ([email protected])
where D is the data distribution, ϵ is the margin of perturbation, and L is the loss function, e.g., the cross-entropy loss. Generally, the Projected Gradient Descent (PGD) at-
tack [16] has shown satisfactory effectiveness to find adver-
sarial examples in the perturbation bound $\mathcal{B}(x,\epsilon)=\{x' : \|x'-x\|\le\epsilon\}$, which is commonly used in solving the inner
maximization problem in (1):
$$x^{t+1} = \Pi_{\mathcal{B}(x,\epsilon)}\big(x^{t} + \alpha\cdot\mathrm{sign}(\nabla_{x^{t}}\mathcal{L}(\theta; x^{t}, y))\big), \qquad (2)$$
where Π is the projection function and α controls the step
size of gradient ascent. TRADES [30] is another variant of
AT, which adds a regularization term to adjust the trade-off
between robustness and accuracy [22, 24]:
$$\min_{\theta}\;\mathbb{E}_{(x,y)\sim\mathcal{D}}\Big\{\mathcal{L}(\theta; x, y) + \beta\max_{\|x'-x\|\le\epsilon}\mathcal{K}\big(f_{\theta}(x), f_{\theta}(x')\big)\Big\}, \qquad (3)$$
where K(·) is the KL divergence and β is the robustness
regularization to adjust the robustness-accuracy trade-off.
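For reference, a minimal PyTorch sketch of the PGD inner maximization in Eq. (2) is given below; the random start, the ℓ∞ ball, and the step settings are the standard choices rather than anything specific to CFA.

```python
# Minimal PGD sketch (Eq. 2) for the inner maximization of Eq. (1),
# using an L-infinity ball; step count and sizes are typical values.
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    # Random start inside the epsilon-ball (common practice for PGD).
    x_adv = torch.clamp(x_adv + torch.empty_like(x_adv).uniform_(-eps, eps), 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                   # gradient ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project onto B(x, eps)
            x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()
```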
Although certain robustness has been achieved by AT
and its variants, there still exists a stark difference among
class-wise robustness in adversarially trained models, i.e.,
the model may exhibit strong robustness on some classes
while it can be highly vulnerable on others, as first revealed in [4, 21, 29]. This disparity raises the issue of ro-
bustness fairness, which can lead to further safety concerns
of DNNs, as the models that exhibit good overall robustness
may be easily fooled on some specific classes, e.g., the stop
sign in automatic driving. To address this issue, Fair Robust
Learning (FRL) [29] has been proposed, which adjusts the
margin and weight among classes when fairness constraints
are violated. However, this approach only brings limited
improvement on robust fairness while causing a drop on
overall robustness.
In this paper, we first present some theoretical insights on
how different adversarial configurations impact class-wise
robustness, and reveal that strong attacks can be detrimen-
tal to the hard classes (classes that have lower clean accu-
racy). This finding is further empirically confirmed through
evaluations of models trained under various adversarial con-
figurations. Additionally, we observe that the worst robust-
|
Veksler_Test_Time_Adaptation_With_Regularized_Loss_for_Weakly_Supervised_Salient_CVPR_2023 | Abstract
It is well known that CNNs tend to overfit to the training
data. Test-time adaptation is an extreme approach to deal
with overfitting: given a test image, the aim is to adapt the
trained model to that image. Indeed nothing can be closer to
the test data than the test image itself. The main difficulty of
test-time adaptation is that the ground truth is not available.
Thus test-time adaptation, while intriguing, applies to only
a few scenarios where one can design an effective loss func-
tion that does not require ground truth. We propose the first
approach for test-time Salient Object Detection (SOD) in
the context of weak supervision. Our approach is based on
a so-called regularized loss function, which can be used for training a CNN when pixel-precise ground truth is unavail-
able. Regularized loss tends to have lower values for the
more likely object segments, and thus it can be used to fine-
tune an already trained CNN to a given test image, adapting
to images unseen during training. We develop a regularized
loss function particularly suitable for test-time adaptation
and show that our approach significantly outperforms prior
work for weakly supervised SOD.
| 1. Introduction
A well known problem with CNNs is that they tend to
overfit to the training data. One approach to deal with over-
fitting is test-time adaptation [30]. A model is trained on
the training data, and then fine-tuned for a few epochs dur-
ing test time on a given test image. The main difficulty of
test-time adaptation is that there is no ground truth for the
test image. Thus test-time adaptation has been previously
used for only a few applications [11, 16, 22, 30, 45] where a
suitable loss function can be designed without ground truth.
We propose the first approach for test-time Salient Ob-
ject Detection (SOD). The goal of SOD is to find image
regions that attract human attention. Convolutional Neu-
ral Networks (CNNs) [14, 15] brought a significant break-
through for SOD [18, 28, 46, 47]. Traditional training of CNNs for SOD requires pixel-precise ground truth, but ob-
taining such annotations is a substantial effort. Therefore,
there is an increased interest in semi-supervised [20,23] and
weakly supervised SOD [17, 25, 33, 40, 41, 44]. Weak su-
pervision requires less annotation effort, compared to semi-
supervision. In this paper, we assume image level supervi-
sion, the weakest supervision type, where one provides only
images containing salient objects, with no other annotation.
Most image level weakly supervised SOD ap-
proaches [17, 25, 40, 41, 44] are based on noisy pseudo
labels, constructed from unsupervised SOD methods.
These approaches are hard to modify for test-time adapta-
tion, as at test time, one has only one image to create noisy
pseudo-ground truth samples, but combating noise requires
diverse samples.
Our test-time adaptation is based on the approach in [33].
Different from the other image level supervised SOD meth-
ods, the approach in [33] does not require pseudo labels.
They design a regularized loss1to use for CNN training
without pixel precise annotations. Regularized loss models
class-independent properties of object shapes and is likely
to have lower values for segmentations that separate the ob-
ject from the background. This makes regularized loss ap-
proach particularly suitable for test time adaptation.
We design a regularized loss function tailored for test
time adaptation. The main problem with regularized loss
in [33] is that it may result in a trivial empty solution if hy-
perparameters of the loss function are not chosen correctly.
When training on a large dataset, an occasional empty result
is not a big problem, but when training on a single test im-
age, an empty result is disastrous, catastrophically worsen-
ing the performance. Thus we must develop a hyperparame-
ter setting method that avoids empty solutions. However, in
the absence of ground truth, setting hyperparameter weights
is not trivial. We propose a method for setting hyperparame-
ters specific for each test image such that an empty solution
is avoided. This method is crucial for obtaining good results
1In the context of CNNs, regularization is a term often used to refer to
the norm regularization on the network weights [8]. Here, regularized loss
refers to the loss function on the output of CNN.
Figure 1. Overview of our approach. Top: test image, ground truth, output of base CNN on the test image, and the result of dense CRF [13] post-processing of the base CNN result. Bottom: test-time dataset, CNN output on the test image during several test time epochs, and the final result of the adapted CNN.
during test-time adaptation.
Fig. 1 is an overview of our approach. First we train
CNN for SOD in weakly supervised setting with image
level labels using [33]. The result is called the base CNN.
The top row of Fig. 1 shows a test image, ground truth, and
the result of the base CNN. Given a test image, we create a
small training dataset by augmentation, see Fig. 1, bottom,
left. Then we fine tune the base CNN on the small dataset,
using our version of regularized loss, Fig. 1, bottom, mid-
dle. The resulting CNN is called the adapted CNN.
The result of base CNN (Fig. 1, top, third image) has
many erroneous regions. Sometimes dense CRF [13] is
used for post processing to improve performance. We apply
dense CRF to the output of base CNN (Fig. 1, top, right).
Dense CRF removes small spurious regions but is unable to
remove the large erroneous regions as these are salient with
high confidence according to the base CNN.
In contrast, our approach is able to produce much better
results, Fig. 1, bottom, right. This is because the base CNN
has a high value of regularized loss for this test image. As
the fine tuning proceeds, CNN is forced to find alternative
segmentations with better loss values, resulting in increas-
ingly better segmentations. Unlike CRF post processing,
our adapted CNN is able to remove the large erroneous re-
gions, as their high confidence values according to the base
CNN are ignored, and these regions give higher values of
regularized loss. Both dense CRF and our approach use
CRF models. However, as opposed to post-processing, we
use CRF for CNN supervision, enabling CNN to learn a
better segmentation.
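A minimal sketch of this test-time adaptation loop is given below. The augmentations, epoch count, learning rate, and the `regularized_loss` callable (standing in for the ground-truth-free regularized loss discussed above) are illustrative placeholders, not the exact configuration used in the paper.

```python
# Sketch of test-time adaptation: build a tiny dataset from one test image by
# augmentation, then fine-tune the base CNN for a few epochs with a
# regularized (ground-truth-free) loss.
import copy
import torch
import torchvision.transforms as T


def adapt_to_test_image(base_cnn, test_image, regularized_loss,
                        num_aug=8, epochs=5, lr=1e-5):
    model = copy.deepcopy(base_cnn)                # keep the base CNN untouched
    model.train()
    augment = T.Compose([
        T.RandomHorizontalFlip(),
        T.ColorJitter(brightness=0.2, contrast=0.2),
    ])
    # Small test-time dataset: augmented copies of the single test image (C, H, W).
    batch = torch.stack([augment(test_image) for _ in range(num_aug)])

    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        optimizer.zero_grad()
        saliency = model(batch)                    # predicted saliency maps
        loss = regularized_loss(saliency, batch)   # no ground truth needed
        loss.backward()
        optimizer.step()

    model.eval()
    with torch.no_grad():
        return model(test_image.unsqueeze(0))      # adapted prediction
```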
Our experiments on the standard benchmarks show that
test time adaptation significantly improves performance,
achieving the new state-of-the-art in image-level weakly super-
vised SOD.
This paper is organized as follows: Sec. 2 is relatedwork, Sec. 3 explains the approach in [33], Sec. 4 describes
our approach, and Sec. 5 presents experiments.
|
Wang_Compacting_Binary_Neural_Networks_by_Sparse_Kernel_Selection_CVPR_2023 | Abstract
Binary Neural Network (BNN) represents convolution
weights with 1-bit values, which enhances the efficiency of
storage and computation. This paper is motivated by a pre-
viously revealed phenomenon that the binary kernels in suc-
cessful BNNs are nearly power-law distributed: their val-
ues are mostly clustered into a small number of codewords.
This phenomenon encourages us to compact typical BNNs
and obtain further close performance through learning non-
repetitive kernels within a binary kernel subspace. Specifi-
cally, we regard the binarization process as kernel grouping
in terms of a binary codebook, and our task lies in learning
to select a smaller subset of codewords from the full code-
book. We then leverage the Gumbel-Sinkhorn technique to
approximate the codeword selection process, and develop
the Permutation Straight-Through Estimator (PSTE) that
is able to not only optimize the selection process end-to-
end but also maintain the non-repetitive occupancy of se-
lected codewords. Experiments verify that our method re-
duces both the model size and bit-wise computational costs,
and achieves accuracy improvements compared with state-
of-the-art BNNs under comparable budgets.
| 1. Introduction
It is crucial to design compact Deep Neural Networks
(DNNs) which allow the model deployment on resource-
constrained embedded devices, since most powerful DNNs
including ResNets [10] and DenseNets [13] are storage
costly with deep and rich building blocks piled up. Plenty of
approaches have been proposed to compress DNNs, among
which network quantization [15, 43, 45] is able to reduce
memory footprints as well as accelerate the inference speed
by converting full-precision weights to discrete values. Bi-
nary Neural Networks (BNNs) [2, 14] belong to the fam-
ily of network quantization but they further constrict the
parameter representations to binary values (±1). In this way, the model is largely compressed. More importantly,
floating-point additions and multiplications in conventional
DNNs are less required and mostly reduced to bit-wise op-
erations that are well supported by fast inference acceler-
ators [32], particularly when activations are binarized as
well. To some extent, this makes BNNs more computation-
ally efficient than other compression techniques, e.g., net-
work pruning [9,11,25] and switchable models [37,41,42].
Whilst a variety of methods are proposed to improve the
performance of BNNs, seldom is there a focus on discussing
how the learnt binary kernels are distributed in BNNs. A re-
cent work SNN [38] demonstrates that, by choosing typical
convolutional BNN models [27, 31, 32] well trained on Im-
ageNet and displaying the distribution of the 3×3 kernels along all possible $2^{3\times 3}$ binary values (a.k.a. codewords),
these kernels nearly obey the power-law distribution: only a
small portion of codewords are activated for the most time.
Such a phenomenon is re-illustrated in Figure 1(b). This
observation motivates SNN to restrict the size of the code-
book by removing those hardly-selected codewords. As a
result, SNN is able to compact BNN further since index-
ing the kernels with a smaller size of codebook results in a
compression ratio of $\log_2(n)/\log_2(N)$, where $n$ and $N$ are
separately the sizes of the compact and full codebooks.
However, given that the size of codebook is limited (only
512), the sub-codebook degenerates during training since
codewords are likely to become repetitive. Therefore, we
believe the clustering property of kernels can be further ex-
ploited during the training of BNNs. To do so, we refor-
mulate the binary quantization process as a grouping task
that selects, for each kernel, the nearest codeword from a
binary sub-codebook which is obtained by selecting opti-
mal codewords from the full one. To pursue an optimal
solution and retain the non-repetitive occupancy of the se-
lected codewords, we first convert the sub-codebook se-
lection problem to a permutation learning task. However,
learning the permutation matrix is non-differentiable since the permutation matrix takes only 0/1 entries. Inspired
by the idea in [29], we introduce the Gumbel-Sinkhorn op-
[Figure 1 panels: (a) codebook constructed with sub-vectors (multi-channel codewords); (b) codebook constructed with kernels (single-channel codewords, ours); (c) product quantization-based optimization for binary codewords; (d) selection-based optimization for binary codewords (ours).]
Figure 1. Codebook distributions under different decomposition approaches. Statistics in both sub-figures are collected from the same BNN model (XNOR-Net [32] upon ResNet-18 [10]) well-trained on ImageNet [4]. In (a), each codeword is a flattened sub-vector (of size 1×9). In (b), each codeword is a 3×3 convolution kernel. The codebook in either sub-figure consists of $2^{9}=512$ different codewords. Upper tables provide the total percentages of the top-$n$ most frequent codewords. In (c), we observe that the sub-codebook highly degenerates during training, since codewords tend to be repetitive when being updated independently. While in (d), the diversity of codewords is preserved, which implies the superiority of our selection-based learning.
eration to generate a continuous and differentiable approxima-
tion of the permutation matrix. During training, we further
develop Permutation Straight-Through Estimator (PSTE), a
novel method that tunes the approximated permutation ma-
trix end-to-end while maintaining the binary property of the
selected codewords. The details are provided in § 3.2 and
§ 3.3. We further provide the complexity analysis in § 3.4.
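To make the Gumbel-Sinkhorn idea concrete, the sketch below perturbs a learnable score matrix with Gumbel noise, applies iterative row/column normalization to obtain an approximately doubly stochastic (soft permutation) matrix, and uses a straight-through hard assignment in the forward pass. The temperature, iteration count, and the hard-assignment rule are assumptions; the paper's PSTE (§ 3.2–3.3) is more involved.

```python
# Sketch of Gumbel-Sinkhorn with a straight-through hard assignment.
import torch


def gumbel_sinkhorn(log_alpha, tau=1.0, n_iters=20):
    # log_alpha: (N, N) learnable score matrix over codebook orderings.
    gumbel = -torch.log(-torch.log(torch.rand_like(log_alpha) + 1e-20) + 1e-20)
    log_p = (log_alpha + gumbel) / tau
    for _ in range(n_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # row normalization
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # column normalization
    return log_p.exp()  # approximately doubly stochastic


def straight_through_permutation(soft_perm):
    # Hard assignment in the forward pass, soft gradients in the backward pass.
    # Note: row-wise argmax is not guaranteed to be a strict permutation;
    # the paper's PSTE handles that case, which is omitted here.
    hard = torch.zeros_like(soft_perm)
    hard.scatter_(1, soft_perm.argmax(dim=1, keepdim=True), 1.0)
    return hard + soft_perm - soft_perm.detach()


if __name__ == "__main__":
    scores = torch.randn(512, 512, requires_grad=True)   # full codebook of 2^9 codewords
    perm = straight_through_permutation(gumbel_sinkhorn(scores))
    # Taking the first n rows of the permuted codebook would give a sub-codebook;
    # the value of n and the selection rule are assumptions for this sketch.
```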
Extensive results on image classification and object de-
tection demonstrate that our architecture noticeably reduces
the model size as well as the computational burden. For ex-
ample, by representing ResNet-18 with 0.56-bit per weight
on ImageNet, our method brings in 214× saving of bit-wise operations and 58× reduction of the model size. Though
state-of-the-art BNNs have achieved remarkable compres-
sion efficiency, we believe that further compacting BNNs
is still beneficial, since it allows us to adopt deeper, wider, and thus more expressive architectures without exceeding the complexity budget of standard BNNs. For example, our 0.56-
bit ResNet-34 obtains 1.7% higher top-1 accuracy than the
state-of-the-art BNN on ResNet-18, while its computational
costs are lower and the storage costs are almost the same.
Existing methods [20, 21] (apart from SNN [38]) that also attempt to obtain more compact models than BNNs are quite different from ours, as will be described in § 2. One of the crucial points is that their codewords are sub-vectors from (flattened) convolution weights across multiple channels, whereas each of our codewords corresponds to a complete kernel that maintains the spatial dimensions (width and height) of a single channel. The reason why we formu-
late the codebook in this way stems from the observation in
Figure 1(b), where the kernels are sparsely clustered. Dif-
ferently, as shown in Figure 1(a), the codewords are nearly
uniformly activated if the codebook is constructed from flat-
tened sub-vectors, which could be because the patterns of the input are spatially selective but channel-wise uniformly distributed. Our method thus has the potential to recover better expressivity of BNNs by following this natural char-
acteristic. In addition, we optimize the codewords via non-
repetitive selection from a fixed codebook, which rigorously
ensures the dissimilarity between every two codewords and
thus enables more capacity than the product quantization
method used in [20], as compared in Figure 1(c)(d). On Im-
ageNet with the same backbone, our method exceeds [20]
and [21] by 6.6% and 4.5% top-1 accuracies, respectively.
|
Wang_Deep_Arbitrary-Scale_Image_Super-Resolution_via_Scale-Equivariance_Pursuit_CVPR_2023 | Abstract
The ability of scale-equivariance processing blocks
plays a central role in arbitrary-scale image super-
resolution tasks. Inspired by this crucial observation, this
work proposes two novel scale-equivariant modules within
a transformer-style framework to enhance arbitrary-scale
image super-resolution (ASISR) performance, especially in
high upsampling rate image extrapolation. In the feature
extraction phase, we design a plug-in module called Adap-
tive Feature Extractor, which injects explicit scale informa-
tion in frequency-expanded encoding, thus achieving scale-
adaption in representation learning. In the upsampling
phase, a learnable Neural Kriging upsampling operator
is introduced, which simultaneously encodes both relative
distance (i.e., scale-aware) information as well as feature
similarity (i.e., with priori learned from training data) in
a bilateral manner, providing scale-encoded spatial feature
fusion. The above operators are easily plugged into multiple stages of an SR network, and a recently emerging pre-training strategy is also adopted to further boost the model's performance. Extensive experimental results have
demonstrated the outstanding scale-equivariance capabil-
ity offered by the proposed operators and our learning
framework, with much better results than previous SOTA
methods at arbitrary scales for SR. Our code is available
at https://github.com/neuralchen/EQSR.
| 1. Introduction
Arbitrary-scale image super-resolution (ASISR), which
aims at upsampling the low-resolution (LR) images to high-
resolution (HR) counterparts by any proper real-valued
magnifications with one single model, has become one of
the most interesting topics in low-level computer vision
research for its flexibility and practicality. Unfortunately,
compared with fixed-scale SISR models [2,3,11,19,20,22],
existing methods on ASISR [4,9,18,33] usually offer much
lower SR performances (e.g., PSNR), hindering their prac-
*Equal Contribution.
†Corresponding author: Bingbing Ni.
[Figure 1 plot: PSNR degradation rate at ×2/×4/×6 relative to fixed-scale models, comparing our method with ArbSR, together with visual comparisons against the HR ground truth and bicubic upsampling.]
Figure 1. Scale-equivariance of our network. We compare the PSNR degradation rate of our method and ArbSR [33]. Taking the SOTA fixed-scale method HAT [3] as reference, our model presents a more stable degradation as the scale increases, reflecting the equivariance of our method. Please enlarge the pdf for details.
tical applications. The major cause of the defects of previous ASISR methods is the lack of scale-equivariance in dealing with scale-varying image features, as explained in detail below.
On the one hand, the model’s backbone should possess
the capability of adaptively processing features according to
the sampling scale in order to achieve scale-equivariance in
the feature extraction phase. To be concrete, since ASISR
models rely on only one single backbone to handle different
scales, designing a scale-equivariant feature learning module that extracts, transforms, and pools image information
from adaptively adjusted sampling positions (i.e., accord-
ing to the scaled distance) to obtain scale-equivariant fea-
ture maps from the same source image with different scales
is essential. As shown in Figure 2, we analyze a series of
fixed-scale HAT [3] models and find that the extracted fea-
tures show apparent divergences from the middle of the net-
work to handle different scale factors, demonstrating that
features for different scales should be extracted adaptively.
On the other hand, we also expect the model to possess
suitable scale-equivariant properties in the image/feature
upsampling stage. Namely, the upsampling module should
be designed to perform adaptive interpolation operations ac-
cording to arbitrary scaling factors. It is worth noting that
the ability to handle out-of-distribution scales (i.e., scales
that the model has never seen in training) is crucial for
ASISR models, since the training process cannot cover all possible upsampling scales. However, existing methods commonly use 3×3 convolution layers for up-
sampling, which has been proved to lack scale equivari-
ance by many studies [35–37], leading to insensitivity to
the changes of the scale factor. Some implicit-field-based
methods, such as LIIF [4], adopt a channel-separated MLP
to enhance the scale equivariance; however, additional oper-
ations, including feature unfolding and local ensemble, are
needed, resulting in a cumbersome upsampler. Alias-Free
StyleGAN [12] points out that 1×1 convolution could be regarded as an instance of a continuously E(2)-equivariant model [34] in image generation, but a 1×1 receptive field
cannot aggregate the crucial local information for SR.
Motivated by the above analysis, this work proposes
two novel scale-equivariant modules within a transformer-
style framework for enhancing arbitrary-scale image super-
resolution performance. In the feature extraction phase,
we design a novel module called Adaptive Feature Ex-
tractor (AFE), which explicitly injects scale information
in the form of frequency-expanded encoding to modulate
the weights of subsequent convolution. Combined with
the traditional self-attention mechanism, this operator can
be plugged into multiple stages of the feature extraction
sub-network and achieves a large receptive field as well as
good scale-adaption properties in representation learning.
When upsampling, instead of relying solely on pixel-independent convolutions (e.g., Alias-Free StyleGAN [12], LIIF [4]), we propose a brand-new paradigm, i.e., the Neural Kriging upsampler, which endows vanilla K×K convolu-
tions with highly competitive equivariance while maintain-
ing excellent spatial aggregation. Specifically, the Neural
Kriging upsampler simultaneously encodes geometric in-
formation (i.e., relative position) and feature similarity (i.e.,
prior knowledge learned from training data) in a bilateral
manner, providing scale-encoded spatial feature fusion.
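The sketch below illustrates the general idea of a frequency-expanded scale encoding that modulates convolution weights by the sampling scale. The frequency bank, the small MLP, and the per-channel modulation are assumptions made for the example and do not reproduce the exact AFE or Neural Kriging designs.

```python
# Sketch of a scale-conditioned convolution: encode the scale factor with a
# bank of sinusoidal frequencies and use it to modulate shared conv weights.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class ScaleModulatedConv(nn.Module):
    def __init__(self, channels=64, num_freqs=8):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.register_buffer("freqs", 2.0 ** torch.arange(num_freqs) * math.pi)
        self.to_scale = nn.Sequential(
            nn.Linear(2 * num_freqs, channels), nn.ReLU(),
            nn.Linear(channels, channels),
        )

    def forward(self, feat, scale):
        # feat: (B, C, H, W); scale: (B,) upsampling factor per sample.
        enc = scale[:, None] * self.freqs[None, :]
        enc = torch.cat([enc.sin(), enc.cos()], dim=-1)     # frequency-expanded encoding
        mod = self.to_scale(enc)                            # (B, C) per-channel gains
        weight = self.conv.weight                           # shared (C, C, 3, 3) kernel
        out = []
        for b in range(feat.size(0)):
            # Modulate the output channels of the shared kernel by the scale code.
            w_b = weight * mod[b].view(-1, 1, 1, 1)
            out.append(F.conv2d(feat[b:b + 1], w_b, self.conv.bias, padding=1))
        return torch.cat(out, dim=0)
```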
Combining the above modules, we construct a model
with a certain equivariance named EQSR, which can adap-
tively handle different sampling rates. We conduct exten-
sive qualitative and quantitative experiments to verify the
superiority of our method on ASISR benchmarks. Com-
pared with state-of-the-art methods, average PSNRs of
our model have shown significant advantages in both in-
distribution and out-of-distribution cases. Under the ×2 and ×3 configurations, we surpass the previous SOTA LTE [18] by 0.33dB (34.83dB vs. 34.50dB) and 0.35dB (29.76dB vs. 29.41dB) on the Urban100 dataset. Under the ×6 con-
figuration, we also achieve a large gap of 0.21dB, proving
the effectiveness of our scale-equivariant operator.
Figure 2. Feature similarity of different models. We compare the recent SOTA fixed-scale model HAT [3] and arbitrary-scale model ArbSR [33]. (a) shows the CKA similarity of ×2/3/4 features at each layer; (b) compares the performance of these methods on the
|
Wang_LANA_A_Language-Capable_Navigator_for_Instruction_Following_and_Generation_CVPR_2023 | Abstract
Recently, visual-language navigation (VLN) – which requires robot agents to follow navigation instructions – has shown great advances. However, the existing literature puts most emphasis on interpreting instructions into actions, only delivering
“dumb” wayfinding agents. In this article, we devise LANA, a language-capable navigation agent which is able to not
only execute human-written navigation commands, but also
provide route descriptions to humans. This is achieved by si-
multaneously learning instruction following and generation
with only one single model. More specifically, two encoders,
respectively for route and language encoding, are built and
shared by two decoders, respectively for action prediction
and instruction generation, so as to exploit cross-task know-
ledge and capture task-specific characteristics. Throughout
pretraining and fine-tuning, both instruction following and
generation are set as optimization objectives. We empirically
verify that, compared with recent advanced task-specific
solutions, LANA attains better performance on both instruction following and route description, with nearly half the complexity. In addition, endowed with language generation capability, LANA can explain its behaviours to humans and assist human wayfinding. This work is expected to foster fu-
ture efforts towards building more trustworthy and socially-
intelligent navigation robots.
| 1. Introduction
Developing agents that can interact with humans in natu-
ral language while perceiving and taking actions in their en-
vironments is one of the fundamental goals in artificial intel-
ligence. As a small step towards this target, visual-language
navigation (VLN) [4] – endowing agents with the ability to execute natural language navigation commands – has recently received significant attention. In VLN space, much work has been done on
language grounding – teaching agents how to relate human
instructions with actions associated with perceptions. However, there has been far less work [27, 70, 1, 77, 23] on the reverse side – language generation – teaching agents how to
*Corresponding author: Yi Yang.
Figure 1: LANA is capable of both instruction following and generation. Its written report benefits human-robot collaboration and, to some extent, can explain its behavior: it takes a wrong action at step 2 as it mistakes the dining room for the bedroom. After gathering more information at step 3, it changes to the correct direction.
verbalize a vivid description of navigation routes. More criti-
cally, existing VLN literature separately train agents that are
specialized for each single task. As a result, the delivered
agents are either strong wayfinding actors but never talking,
or conversable route instructors but never walking.
This article underlines a fundamental challenge in VLN:
Can we learn a single agent that is capable of both naviga-
tion instruction following and route description creation?
We propose LANA, a language-capable navigation agent that is fully aware of this challenge (Fig. 1). By simultaneously learning instruction grounding and generation, LANA formalises human-to-robot and robot-to-human communi-
cation, conveyed using navigation-oriented natural language,
in a unified framework. This is of great importance, because:
i)It completes the necessary communication cycle between
human and agents, and promotes VLN agent’s real-world
utility [58]. For instance, when an agent takes a long time to execute a navigation command, during which sustained human attention is infeasible and undesirable, the agent should report its progress [72]. Also, agents are expected to direct humans in the agents' explored areas [81], which is relevant for
search and rescue robots in disaster regions [71, 19], guide
robots in public spaces [77], and navigation devices for the
visually impaired [36]. ii) Two-way communication is integral to tight human-robot coordination (i.e., “I will continue this way ···”) [7], and boosts human trust in robots [6, 24], hence increasing the acceptance of navigation robots. iii) De-
veloping the language generation skill makes for more explainable robots, which can interpret their navigation behaviors in the form of human-readable route descriptions.
Technically, LANA is built as a Transformer-based, multi-
task learning framework. The network consists of two uni-
modal encoders respectively for language and route encod-
ing, and two multimodal decoders respectively for route-to-
instruction and instruction-to-route translation, based on the
two encoders. The whole network is end-to-end learned with
the tasks of both instruction grounding and generation, dur-
ing both pretraining and fine-tuning phases. Taking all these together, LANA provides a unified, powerful framework that explores both task-specific and cross-task knowledge at the heart of model design and network training. LANA thus can
better comprehend linguistic cues ( e.g., words, phrases, and
sentences), visual perceptions, actions over long temporal
horizons and their relationships, even in the absence of ex-
plicit supervision, and eventually benefits both the two tasks.
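A compact skeleton of such a shared two-encoder / two-decoder multi-task setup is sketched below: one decoder predicts actions from the route and the encoded instruction, while the other generates instruction tokens from the route. Layer counts, hidden sizes, the route embedding, and the head designs are placeholders, not the actual LANA architecture. In training, cross-entropy losses on the two heads would be summed, matching the joint optimization of instruction following and generation described above.

```python
# Skeleton of a two-encoder / two-decoder multi-task navigator (illustrative).
import torch
import torch.nn as nn


class LanguageCapableNavigator(nn.Module):
    def __init__(self, vocab=3000, d=256, num_actions=6):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, d)
        self.lang_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
        self.route_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
        self.action_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d, nhead=4, batch_first=True), num_layers=2)
        self.instr_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d, nhead=4, batch_first=True), num_layers=2)
        self.action_head = nn.Linear(d, num_actions)
        self.word_head = nn.Linear(d, vocab)

    def forward(self, instr_tokens, route_feats, prev_words):
        # instr_tokens: (B, L) word ids; route_feats: (B, T, d) route/view features;
        # prev_words: (B, S) previously generated word ids (causal mask omitted).
        lang = self.lang_encoder(self.word_emb(instr_tokens))
        route = self.route_encoder(route_feats)
        # Instruction following: decode actions, attending to the language memory.
        act_logits = self.action_head(self.action_decoder(route, lang))
        # Instruction generation: decode words, attending to the route memory.
        word_logits = self.word_head(
            self.instr_decoder(self.word_emb(prev_words), route))
        return act_logits, word_logits
```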
We conduct extensive experiments on three famous VLN
datasets ( i.e., R2R [4], R4R [38], REVERIE [62]), for both
instruction following and generation, giving a few intriguing
points: First , LANA successfully solves the two tasks using
only one single agent, without switching between different
models. Second, with an elegant and integrated architecture, LANA performs comparably to, or even better than, recent top-leading task-specific alternatives. Third, compared to learning each task individually, training LANA on the two tasks jointly obtains better performance with reduced complexity and model size, confirming the advantage of LANA in cross-task relatedness modeling and parameter efficiency. Fourth, LANA can explain its behavior to humans by verbally describing its navigation routes. LANA can be essentially viewed as
an explainable VLN robot, equipped with a self-adaptively
trained language explainer. Fifth , subjective analyses reveal
our linguistic outputs are of higher quality than the baselines
but still lag behind human-generated utterances. While there
is still room for improvement, our results shed light on a pro-
mising direction of future VLN research, with great poten-
tial for explainable navigation agents and robot applications.
|
Wang_Rethinking_the_Learning_Paradigm_for_Dynamic_Facial_Expression_Recognition_CVPR_2023 | Abstract
Dynamic Facial Expression Recognition (DFER) is a
rapidly developing field that focuses on recognizing facial
expressions in video format. Previous research has considered non-target frames as noisy frames, but we propose that the problem should instead be treated as weakly supervised. We also identify the imbalance of short- and long-
term temporal relationships in DFER. Therefore, we in-
troduce the Multi-3D Dynamic Facial Expression Learn-
ing (M3DFEL) framework, which utilizes Multi-Instance
Learning (MIL) to handle inexact labels. M3DFEL gen-
erates 3D-instances to model the strong short-term tem-
poral relationship and utilizes 3DCNNs for feature extrac-
tion. The Dynamic Long-term Instance Aggregation Mod-
ule (DLIAM) is then utilized to learn the long-term temporal
relationships and dynamically aggregate the instances. Our
experiments on DFEW and FERV39K datasets show that
M3DFEL outperforms existing state-of-the-art approaches
with a vanilla R3D18 backbone. The source code is avail-
able at https://github.com/faceeyes/M3DFEL.
| 1. Introduction
Facial expressions are essential in communication [26,
27,45]. Understanding the emotions of others through their
facial expressions is critical during conversations. Thus,
automated recognition of facial expressions is a significant
challenge in various fields, such as human-computer inter-
action (HCI) [25, 34], mental health diagnosis [12], driver
fatigue monitoring [24], and metahuman [6]. While signif-
icant progress has been made in Static Facial Expression
Recognition (SFER) [23, 43, 44, 55], there is increasing at-
tention on Dynamic Facial Expression Recognition.
*Both authors contributed equally to this work. Work done during Hanyang Wang's internship at Tencent Youtu Lab; Bo Li is the project lead.
†Corresponding authors. [email protected], [email protected],
[email protected]
Figure 1. In-the-wild dynamic facial expressions (target vs. non-target expressions). In the first row of images, the subject appears predominantly Neutral, yet the video is labeled as Happy without specifying the exact moment when the emotion is expressed. In the second row, the emotion is evident from the perspective of a few frames, but any single one of them is noisy and unclear. In the third row, all frames appear Neutral, but a closer analysis of facial movement over time reveals a rising of the corner of the mouth, indicating a smile.
With the availability of large-scale in-the-wild datasets
like DFEW [11] and FERV39K [46], several methods have
been proposed for DFER [21, 22, 31, 47, 54]. Previous
works [31, 54] have simply applied general video under-
standing methods to recognize dynamic facial expressions.
Later on, Li et al. [22] observe that DFER contains a large
number of noisy frames and propose a dynamic class token
and a snippet-based filter to suppress the impact of these
frames. Li et al. [21] propose an Intensity Aware Loss to
account for the large intra-class and small inter-class differ-
ences in DFER and force the network to pay extra atten-
tion to the most confusing class. However, we argue that
DFER requires specialized designs rather than being con-
sidered a combination of video understanding and SFER.
Although these works [21, 22, 47] have identified some is-
sues in DFER, their models have only addressed them in a
rudimentary manner.
Firstly, these works fail to recognize that the existence of
non-target frames in DFER is actually caused by weak su-
pervision. When collecting large-scale video datasets, an-
notating the precise location of labels is labor-intensive and
challenging. A dynamic facial expression may contain a
change between non-target and target emotions, as shown in
Figure 1. Without a location label that can guide the model
to ignore the irrelevant frames and focus on the target, mod-
els are likely to be confused by the inexact label. Therefore,
modeling these non-target frames as noisy frames directly is
superficial, and the underlying weakly supervised problem
remains unsolved.
Secondly, the previous works directly adopt sequence models without a dedicated design for DFER. How-
ever, we find that there is an imbalance between short-
and long-term temporal relationships in DFER. For exam-
ple, some micro-expressions may occur within a short clip,
while some facial movements between expressions may dis-
rupt individual frames, as shown in Figure 1. In contrast,
there is little temporal relationship between a Happy face
at the beginning of a video and another Happy face at the
end. Therefore, neither modeling the entire temporal rela-
tionship nor using completely time-irrelevant aggregation
methods is suitable for DFER. Instead, a method should
learn to model the strong short-term temporal relationship
and the weak long-term temporal relationship differently.
To address the first issue, we suggest using weakly su-
pervised strategies to train DFER models instead of treating
non-target frames as noisy frames. Specifically, we propose
modeling DFER as a Multi-Instance Learning (MIL) prob-
lem, where each video is considered as a bag containing
a set of instances. In this MIL framework, we disregard
non-target emotions in a video and only focus on the target
emotion. However, most existing MIL methods are time-
independent, which is unsuitable for DFER. Therefore, a
dedicated MIL framework for DFER is necessary to address
the imbalanced short- and long-term temporal relationships.
The M3DFEL framework proposed in this paper is de-
signed to address the imbalanced short- and long-term tem-
poral relationships and the weakly supervised problem in
DFER in a unified manner. It uses a combination of 3D-
Instance and R3D18 models to enhance short-term tem-
poral learning. Once instance features are extracted, they
are fed into the Dynamic Long-term Instance Aggregation
Module (DLIAM), which aggregates the features into a
bag-level representation. The DLIAM is specifically de-
signed to capture long-term temporal relationships between
instances. Additionally, the Dynamic Multi-Instance Nor-
malization (DMIN) is employed to maintain temporal con-sistency at both the bag-level and instance-level by perform-
ing dynamic normalization.
Overall, our contributions can be summarized as follows:
• We propose a weakly supervised approach to model
Dynamic Facial Expression Recognition (DFER) as
a Multi-Instance Learning (MIL) problem. We also
identify an imbalance between short- and long-term
temporal relationships in DFER, which makes it inap-
propriate to model the entire temporal relationship or
use time-irrelevant methods.
• We propose the Multi-3D Dynamic Facial Expression
Learning (M3DFEL) framework to provide a unified
solution to the weakly supervised problem and model
the imbalanced short- and long-term temporal relation-
ships in DFER.
• We conduct extensive experiments on DFEW and
FERV39K, and our proposed M3DFEL achieves state-
of-the-art results compared with other methods, even
when using a vanilla R3D18 backbone. We also con-
duct visualization experiments to analyze the perfor-
mance of M3DFEL and uncover unsolved problems.
|
Wang_Detecting_Everything_in_the_Open_World_Towards_Universal_Object_Detection_CVPR_2023 | Abstract
In this paper, we formally address universal object de-
tection, which aims to detect every scene and predict ev-
ery category. The dependence on human annotations, the
limited visual information, and the novel categories in the
open world severely restrict the universality of traditional
detectors. We propose UniDetector , a universal object de-
tector that has the ability to recognize enormous categories
in the open world. The critical points for the universal-
ity of UniDetector are: 1) it leverages images of multi-
ple sources and heterogeneous label spaces for training
through the alignment of image and text spaces, which guar-
antees sufficient information for universal representations.
2) it generalizes to the open world easily while keeping the
balance between seen and unseen classes, thanks to abun-
dant information from both vision and language modali-
ties. 3) it further promotes the generalization ability to
novel categories through our proposed decoupling train-
ing manner and probability calibration. These contribu-
tions allow UniDetector to detect over 7k categories, the
largest measurable category size so far, with only about
500 classes participating in training. Our UniDetector be-
haves the strong zero-shot generalization ability on large-
vocabulary datasets - it surpasses the traditional supervised
baselines by more than 4% on average without seeing any
corresponding images. On 13 public detection datasets with
various scenes, UniDetector also achieves state-of-the-art
performance with only a 3% amount of training data.1
| 1. Introduction
Universal object detection aims to detect everything in
every scene. Although existing object detectors [18, 31, 42,
*corresponding author
1Codes are available at https://github.com/zhenyuw16/UniDetector.
[Figure 1 diagram: images from multiple sources with heterogeneous label spaces (shared categories such as person, bus, and car; source-specific categories such as surfboard, camera, cue, man, and accordion) feed a universal object detector that detects every scene and every category, generalizing to the open world.]
Figure 1. Illustration for the universal object detector. It aims to detect every category in every scene and should have the ability to utilize images of multiple sources with heterogeneous label spaces for training and generalize to the open world for inference.
43] have made remarkable progress, they heavily rely on
large-scale benchmark datasets [12, 32]. However, object
detection varies in categories and scenes ( i.e., domains).
In the open world, where images differ significantly from existing ones and unseen classes appear, one has to reconstruct the dataset again to guarantee the success of object detectors, which severely restricts their open-world
generalization ability. In comparison, even a child can gen-
eralize well rapidly in new environments. As a result, uni-
versality becomes the main gap between AI and humans.
Once trained, a universal object detector can directly work
in unknown situations without any further re-training, thus
significantly approaching the goal of making object detec-
tion systems as intelligent as humans.
A universal object detector should have the following
two abilities. First, it should utilize images of multiple
sources and heterogeneous label spaces for training . Large-
scale collaborative training in classification and localization
is required to guarantee that the detector can gain sufficient
information for generalization. Ideal large-scale learning
needs to contain diversified types of images as many as pos-
sible with high-quality bounding box annotations and large
category vocabularies. However, restricted by human anno-
tators, this cannot be achieved. In practice, unlike small vo-
cabulary datasets [12,32], large vocabulary datasets [17,23]
tend to be noisily annotated, sometimes even with the incon-
sistency problem. In contrast, specialized datasets [8,55,70]
only focus on some particular categories. To cover ade-
quate categories and scenes, the detector needs to learn from
all the above images, from multiple sources with heteroge-
neous label spaces, so that it can learn comprehensive and
complete knowledge for universality. Second, it should gen-
eralize to the open world well . Especially for novel classes
that are not annotated during training, the detector can still
predict the category tags without performance degradation.
However, pure visual information cannot achieve the pur-
pose since complete visual learning demands human anno-
tations for fully-supervised learning.
In this paper, we formally address the task of univer-
sal object detection. To realize the above two abilities of
the universal object detector, two corresponding challenges
should be solved. The first one is about training with multi-
source images. Images collected from different sources are
associated with heterogeneous label spaces. Existing detec-
tors are only able to predict classes from one label space,
and the dataset-specific taxonomy and annotation inconsis-
tency among datasets make it hard to unify multiple het-
erogeneous label spaces. The second one is about novel
category discrimination. Motivated by the recent success
of image-text pre-training [20, 39, 58], we leverage their
pre-trained models with language embeddings for recogniz-
ing unseen categories. However, fully-supervised training
makes the detector focus on categories that appear during
training. At the inference time, the model will be biased to-
wards base classes and produce under-confident predictions
for novel classes. Although language embeddings make it
possible to predict novel classes, the performance of them
is still far less than that of base categories.
We propose UniDetector, a universal object detection
framework, to address the above two problems. With the
help of the language space, we first investigate possible
structures to train the detector with heterogeneous label
spaces and discover that the partitioned structure promotes
feature sharing and avoids label conflict simultaneously.
Next, to exploit the generalization ability to novel classes
of the region proposal stage, we decouple the proposal gen-
eration stage and RoI classification stage instead of training
them jointly. Such a training paradigm well leverages their
characteristics and thus benefits the universality of the detector. With this decoupled design, we further present
a class-agnostic localization network (CLN) for producing
generalized region proposals. Finally, we propose probabil-
ity calibration to de-bias the predictions. We estimate the
prior probability of all categories, then adjust the predicted
category distribution according to the prior probability. The
calibration considerably improves the performance on novel classes.
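To make the calibration idea concrete, here is a minimal sketch of prior-based probability calibration. The divide-by-prior form, the exponent `gamma`, and the way the prior is estimated are illustrative assumptions rather than the paper's exact design; the point is only that classes with large estimated priors are down-weighted so novel classes are no longer systematically under-confident.

```python
import numpy as np

def calibrate_probs(probs: np.ndarray, prior: np.ndarray, gamma: float = 0.6) -> np.ndarray:
    """Divide each predicted class probability by the class prior raised to `gamma`, then
    renormalize. `probs`: (num_boxes, num_classes); `prior`: (num_classes,), which could be
    estimated, e.g., from the frequency of predicted labels on an unlabeled split."""
    calibrated = probs / np.power(prior + 1e-12, gamma)        # de-bias towards rare/novel classes
    return calibrated / calibrated.sum(axis=1, keepdims=True)  # renormalize per detection

# toy usage: two detections over three classes, with class 0 heavily over-represented
probs = np.array([[0.70, 0.20, 0.10],
                  [0.55, 0.15, 0.30]])
prior = np.array([0.80, 0.15, 0.05])
print(calibrate_probs(probs, prior))
```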
Our main contributions can be summarized as follows:
• We propose UniDetector, a universal detection frame-
work that empowers us to utilize images of heteroge-
neous label spaces and generalize to the open world.
To the best of our knowledge, this is the first work to
formally address universal object detection.
• Considering the difference of generalization ability in
recognizing novel classes, we propose to decouple the
training of proposal generation and RoI classification
to fully explore the category-sensitive characteristics.
• We propose to calibrate the produced probability,
which balances the predicted category distribution and
raises the self-confidence of novel categories.
Extensive experiments demonstrate the strong universal-
ity of UniDetector. It recognizes the largest measurable number
of categories to date. Without seeing any images from their training
sets, our UniDetector achieves a 4% higher AP on existing large-
vocabulary datasets than fully-supervised methods. Besides
the open-world task, our UniDetector achieves state-of-the-
art results in the closed world: 49.3% AP on COCO with a
pure CNN model, ResNet50, and the 1× schedule.
|
Wu_Neural_Fourier_Filter_Bank_CVPR_2023 | Abstract
We present a novel method to provide efficient and highly
detailed reconstructions. Inspired by wavelets, we learn a
neural field that decomposes the signal both spatially and
frequency-wise. We follow the recent grid-based paradigm
for spatial decomposition, but unlike existing work, encour-
age specific frequencies to be stored in each grid via Fourier
features encodings. We then apply a multi-layer perceptron
with sine activations, taking these Fourier encoded features
in at appropriate layers so that higher-frequency compo-
nents are accumulated on top of lower-frequency compo-
nents sequentially, which we sum up to form the final out-
put. We demonstrate that our method outperforms the state
of the art regarding model compactness and convergence
speed on multiple tasks: 2D image fitting, 3D shape recon-
struction, and neural radiance fields. Our code is available
at https://github.com/ubc-vision/NFFB.
| 1. Introduction
Neural fields [ 59] have recently been shown to be highly
effective for various tasks ranging from 2D image com-pression [ 10,64], image translation [ 4,49], 3D reconstruc-
tion [ 41,48], to neural rendering [ 1,36,37]. Since the in-
troduction of early methods [ 36,38,48], efforts have been
made to make neural fields more efficient and scalable.
Among various extensions, we are interested in two partic-
ular directions: those that utilize spatial decomposition in
the form of grids [ 7,37,51] that allow fast training and level
of detail; and those that encode the inputs to neural fields
with high-dimensional features via frequency transforma-
tion such as periodic sinusoidal representations [ 36,47,53]
that counteract the inherent bias of neural fields to-
wards low-frequency data [53]. The former drastically re-
duced the training time allowing various new application
areas [ 11,52,58,60], while the latter has now become a
standard operation when applying neural fields.
While these two developments have become popular, a
caveat in existing works is that they do not consider the two
together—all grids are treated similarly and interpreted to-
gether by a neural network. We argue that this is an impor-
tant oversight that has a critical outcome. For a model to
be efficient and accurate, different grid resolutions should
focus on different frequency components that are properly
localized. While existing grid methods that naturally local-
localized.
Figure 2. A wavelet-inspired framework – In our framework, given a position x, low- and high-frequency filters are used to decompose the signal, which is then reconstructed by accumulating them and using the intermediate outputs as shown. Here, we utilize a multi-scale grid to act as if it stores these high-frequency filtering outcomes at various spatially decomposed locations.
While existing grid methods, which naturally localize signals, can learn to perform this frequency decompo-
sition, relying purely on learning may lead to sub-optimal
results as shown in Fig. 1. This is also true when locality
is not considered, as shown by the SIREN [ 47] example.
Explicit consideration of both together is hence important.
This caveat remains true even for methods that utilize
both grids and frequency encodings for the input coordi-
nates [ 37] as grids and frequency are not linked, and it is up
to the deep networks to find out the relationship between the
two. Thus, there has also been work that focuses on jointly
considering both space and frequency [ 18,34], but these
methods are not designed with multiple scales in mind thus
single-scale and are designed to be non-scalable. In other
words, they can be thought of as being similar to short-time
Fourier transform in signal processing.
Therefore, in this work, we propose a novel neural field
framework that decomposes the target signal in both space
and frequency domains simultaneously, analogous to the
traditional wavelet decomposition [ 46]; see Fig. 1. Specifi-
cally, a signal is decomposed jointly in space and frequency
through low- and high-frequency filters as shown in Fig. 2.
Here, our core idea is to realize these filters conceptually
as a neural network. We implement the low-frequency path
in the form of Multi-Layer Perceptrons (MLP), leveraging
their frequency bias [ 53]. For the high-frequency compo-
nents, we implement them as lookup operations on grids, as
the grid features can explicitly enforce locality over a small
spatial area and facilitate learning of these components.
This decomposition closely resembles filter banks in
signal processing; we thus name our method neural Fourier
filter bank.
In more detail, we utilize the multi-scale grid structure as
in [20,37,51], but with a twist—we apply frequency encod-
ing in the form of Fourier Features just before the grid fea-
tures are used. By doing so, we convert the linear change in
grid features that arise from bilinear/trilinear interpolation
to appropriate frequencies that should be learned at eachscale level. We then compose these grid features together
through an MLP with sine activation functions, which takes
these features as input at each layer, forming a pipeline that
sequentially accumulates higher-frequency information as
composition is performed as shown in Fig. 2. To facilitate
training, we initialize each layer of the MLP with the target
frequency band in mind. Finally, we sum up all intermedi-
ate outputs together to form the estimated field value.
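The following is a minimal PyTorch sketch of this composition, not the authors' implementation: per-level features (mocked here as learned linear maps instead of interpolated grid lookups) are Fourier-encoded with a per-level frequency, injected into a sine-activated MLP layer by layer, and the intermediate outputs are summed into the field value. Layer widths, the number of levels, and the frequency schedule are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TinyFourierFilterBank(nn.Module):
    """Sketch of a sine-activated MLP that accumulates per-level (mock) grid features.
    Each level contributes an intermediate output; the field value is their sum."""
    def __init__(self, in_dim=3, feat_dim=16, hidden=64, out_dim=1, levels=3):
        super().__init__()
        self.grids = nn.ModuleList([nn.Linear(in_dim, feat_dim) for _ in range(levels)])   # stand-in for grid lookups
        self.layers = nn.ModuleList([nn.Linear(hidden + feat_dim, hidden) for _ in range(levels)])
        self.heads = nn.ModuleList([nn.Linear(hidden, out_dim) for _ in range(levels)])
        self.inp = nn.Linear(in_dim, hidden)
        self.omegas = [2.0 ** i for i in range(levels)]   # increasing frequency per level (assumed schedule)

    def forward(self, x):                                 # x: (N, in_dim) coordinates in [0, 1]
        h = torch.sin(self.inp(x))
        out = 0.0
        for grid, layer, head, omega in zip(self.grids, self.layers, self.heads, self.omegas):
            feat = torch.sin(omega * grid(x))             # Fourier-style encoding of the level feature
            h = torch.sin(layer(torch.cat([h, feat], dim=-1)))
            out = out + head(h)                           # accumulate low-to-high frequency contributions
        return out

field = TinyFourierFilterBank()
print(field(torch.rand(4, 3)).shape)  # torch.Size([4, 1])
```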
We demonstrate the effectiveness of our method under
three different tasks: 2D image fitting, 3D shape reconstruc-
tion, and Neural Radiance Fields (NeRF). We show that our
method achieves a better trade-off between the model com-
pactness versus reconstruction quality than the state of the
arts. We further perform an extensive ablation study to ver-
ify where the gains are coming from.
To summarize, our contributions are as follows:
•we propose a novel framework that decomposes the mod-
eled signal both spatially and frequency-wise;
•we show that our method achieves better trade-off be-
tween quality and memory on 2D image fitting, 3D shape
reconstruction, and Neural Radiance Fields (NeRF);
•we provide an extensive ablation study shedding insight
into the details of our method.
|
Xue_Freestyle_Layout-to-Image_Synthesis_CVPR_2023 | Abstract
Typical layout-to-image synthesis (LIS) models gener-
ate images for a closed set of semantic classes, e.g., 182
common objects in COCO-Stuff. In this work, we explore
the freestyle capability of the model, i.e., how far can it
generate unseen semantics (e.g., classes, attributes, and
styles) onto a given layout, and call the task Freestyle
LIS (FLIS). Thanks to the development of large-scale pre-
trained language-image models, a number of discrimina-
tive models (e.g., image classification and object detection)
trained on limited base classes are empowered with the
to leverage large-scale pre-trained text-to-image diffusion
models to achieve the generation of unseen semantics. The
key challenge of FLIS is how to enable the diffusion model
to synthesize images from a specific layout which very likely
violates its pre-learned knowledge, e.g., the model never
sees “a unicorn sitting on a bench” during its pre-training.
To this end, we introduce a new module called Rectified
Cross-Attention (RCA) that can be conveniently plugged
in the diffusion model to integrate semantic masks. This
“plug-in” is applied in each cross-attention layer of the
model to rectify the attention maps between image and text
tokens. The key idea of RCA is to enforce each text token to
act on the pixels in a specified region, allowing us to freely
put a wide variety of semantics from pre-trained knowledge
(which is general) onto the given layout (which is specific).
Extensive experiments show that the proposed diffusion net-
work produces realistic and freestyle layout-to-image gen-
eration results with diverse text inputs, which has a high po-
tential to spawn a bunch of interesting applications. Code
is available at https://github.com/essunny310/FreestyleNet.
| 1. Introduction
Layout-to-image synthesis (LIS) is one of the prevailing
topics in the research of conditional image generation. It
aims to generate complex scenes where a user requires fine
controls over the objects appearing in a scene. There are
different types of layouts including bboxes+classes [13], se-
mantic masks+classes [29, 37], key points+attributes [33],
skeletons [26], to name a few. In this work, we focus on us-
ing semantic masks to control object shapes and positions,
and using texts to control classes, attributes, and styles. We
aim to offer the user the most fine-grained controls on out-
put images. For example, in Figure 1 (2nd column), the
train and the cat are generated onto the right locations de-
fined by the input semantic layouts, and they should also be
aligned well with their surroundings.
However, most existing LIS models [24, 29, 37, 49, 52]
are trained on a specific dataset from scratch. Therefore,
the generated images are limited to a closed set of seman-
tic classes, e.g., 182 objects on COCO-Stuff [3]. While in
this paper, we explore the freestyle capability of the LIS
models for the generation of unseen classes (including their
attributes and styles) onto a given layout. For instance, in
Figure 1 (5th column), our model stretches and squeezes a
“warehouse” into a mask of “train”, and the “warehouse”
is never seen in the training dataset of LIS. In addition, it
is also able to introduce novel attributes and styles into the
synthesized images (in the 3rd and 4th columns of Figure 1).
The freestyle property aims at breaking the in-distribution
limit of LIS, and it has a high potential to enlarge the appli-
cability of LIS results to out-of-distribution domains such as
data augmentation in open-set or long-tailed semantic seg-
mentation tasks.
To overcome the in-distribution limit, large-scale pre-
trained language-image models ( e.g., CLIP [31]) have
shown the ability of learning general knowledge on an enor-
mous amount of image-text pairs. The learned knowledge
enables a multitude of discriminative methods trained with
limited base classes to predict unseen classes. For instance,
the trained models in [31,51,60] based on pre-trained CLIP
are able to do zero-shot or few-shot classification, which is
achieved by finding the most relevant text prompt ( e.g.,a
photo of a cat ) given an image. However, it is non-trivial
to apply similar ideas to generative models. In classifica-
tion, simple texts ( e.g.,a photo of a cat ) can be convenientlymapped to labels ( e.g.,cat). It is, however, not intuitive to
achieve this mapping from compositional texts ( e.g.,a train
is running on the railroad with grass on both sides ) to syn-
thesized images. Thanks to the development of pre-training
in image generation tasks, we opt to leverage large-scale
pre-trained language-image models for downstream gen-
eration tasks.
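For reference, the zero-shot matching recipe mentioned above can be sketched as follows; the embeddings here are random placeholders standing in for the image and text encoders of a pre-aligned vision-language model, and the prompt template and temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_feat: torch.Tensor, prompt_feats: torch.Tensor, class_names):
    """image_feat: (d,) image embedding; prompt_feats: (K, d) embeddings of prompts such as
    'a photo of a {class}'. Both are assumed to come from the same pre-aligned space."""
    sims = F.normalize(prompt_feats, dim=-1) @ F.normalize(image_feat, dim=-1)
    probs = F.softmax(100.0 * sims, dim=0)     # temperature-scaled similarities (assumed scale)
    return class_names[int(sims.argmax())], probs

classes = ["cat", "dog", "train"]
img = torch.randn(512)                         # placeholder for image_encoder(image)
txt = torch.randn(3, 512)                      # placeholder for text_encoder(prompts)
print(zero_shot_classify(img, txt, classes))
```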
To this end, we propose to leverage the large-scale pre-
trained text-to-image diffusion model to achieve the LIS of
unseen semantics. These diffusion models are powerful in
generating images of high quality that are in line with the
input texts. This can provide us with a generative prior
with a wide range of semantics. However, it is challeng-
ing to enable the diffusion model to synthesize images from
a specific layout which very likely violates its pre-trained
knowledge, e.g., the model never sees “a unicorn sitting on a
bench” during its pre-training. Textual semantics should be
correctly arranged in space without impairing their expres-
siveness. A seminal attempt, PITI [48], learns an encoder to
map the layout to the textual space of a pre-trained text-to-
image model. Although it demonstrates improved quality of
the generated images, PITI has two main drawbacks. First,
it abandons the text encoder of the pre-trained model, losing
the ability to freely control the generation results using dif-
ferent texts. Second, its introduced layout encoder implic-
itly matches the space of the pre-trained text encoder and
hence incurs inferior spatial alignment with input layouts.
In this paper, we propose a simple yet effective frame-
work for the suggested freestyle layout-to-image synthesis
(FLIS) task. We introduce a new module called Rectified
Cross-Attention (RCA) and plug it into a pre-trained text-
to-image diffusion model. This “plug-in” is applied in each
cross-attention layer of the diffusion model to integrate the
input layout into the generation process. In particular, to
ensure the semantics (described by text) appear in a region
(specified by layout), we find that image tokens in the re-
gion should primarily aggregate information from the cor-
responding text tokens. In light of this, the key idea of RCA
is to utilize the layout to rectify the attention maps com-
puted between image and text tokens, allowing us to put
desired semantics from the text onto a layout. In addition,
RCA does not introduce any additional parameters into the
pre-trained model, making our framework an effective so-
lution to freestyle layout-to-image synthesis.
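A minimal sketch of the rectification idea follows, as an illustrative re-implementation under one assumption (it is not the released FreestyleNet code): rectification is realized by masking the image-to-text attention logits so that each image token can only attend to the text tokens whose class occupies its location in the layout.

```python
import torch
import torch.nn.functional as F

def rectified_cross_attention(q_img, k_txt, v_txt, layout_mask):
    """q_img: (B, N_img, d) image-token queries; k_txt, v_txt: (B, N_txt, d) text tokens;
    layout_mask: (B, N_img, N_txt) boolean, True where the text token's class occupies the
    image token's location. Every image token is assumed to have at least one allowed text
    token, so the masked softmax stays well defined."""
    d = q_img.shape[-1]
    logits = torch.einsum('bnd,bmd->bnm', q_img, k_txt) / d ** 0.5
    logits = logits.masked_fill(~layout_mask, float('-inf'))   # disallow off-layout pairs
    attn = F.softmax(logits, dim=-1)
    return torch.einsum('bnm,bmd->bnd', attn, v_txt)

# toy usage: 2 image tokens, 3 text tokens
q, k, v = torch.randn(1, 2, 8), torch.randn(1, 3, 8), torch.randn(1, 3, 8)
mask = torch.tensor([[[True, False, True], [False, True, True]]])
print(rectified_cross_attention(q, k, v, mask).shape)  # torch.Size([1, 2, 8])
```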
As shown in Figure 1, the proposed diffusion network
(FreestyleNet) allows the freestyle synthesis of high-fidelity
images with novel semantics, including but not limited to:
synthesizing unseen objects, binding new attributes to an
object, and rendering the image with various styles. Our
main contributions can be summarized as follows:
• We propose a novel LIS task called FLIS. In this task,
we exploit large-scale pre-trained text-to-image diffu-
sion models with layout and text as conditions.
• We introduce a parameter-free RCA module to plug
in the diffusion model, allowing us to generate images
from a specified layout effectively while taking full ad-
vantage of the model’s generative priors.
• Extensive experiments demonstrate the capability of
our approach to translate a layout into high-fidelity im-
ages with a wide range of novel semantics, which is
beyond the reach of existing LIS models.
|
Xiong_Neural_Map_Prior_for_Autonomous_Driving_CVPR_2023 | Abstract
High-definition (HD) semantic maps are crucial for au-
tonomous vehicles navigating urban environments. Tra-
ditional offline HD maps, created through labor-intensive
manual annotation processes, are both costly and incapable
of accommodating timely updates. Recently, researchers
have proposed inferring local maps based on online sensor
observations; however, this approach is constrained by the
sensor perception range and is susceptible to occlusions. In
this work, we propose Neural Map Prior (NMP) , a neu-
ral representation of global maps that facilitates automatic
global map updates and improves local map inference per-
formance. To incorporate the strong map prior into local
map inference, we employ cross-attention that dynamically
captures correlations between current features and prior
features. For updating the global neural map prior, we
use a learning-based fusion module to guide the network
in fusing features from previous traversals. This design
allows the network to capture a global neural map prior
during sequential online map predictions. Experimental re-
sults on the nuScenes dataset demonstrate that our frame-
work is highly compatible with various map segmentation
and detection architectures and considerably strengthens
map prediction performance, even under adverse weather
conditions and across longer horizons. To the best of our
knowledge, this represents the first learning-based system
for constructing a global map prior.
| 1. Introduction
Autonomous vehicles require high-definition (HD) se-
mantic maps to accurately predict the future trajectories
of other agents and safely navigate urban streets. How-
ever, most autonomous vehicles depend on labor-intensive
and costly pre-annotated offline HD maps, which are con-
structed through a complex pipeline involving multi-trip
LiDAR scanning with survey vehicles, global point cloud
alignment, and manual map element annotation. Despite
their high precision, the scalability of these offline mapping
Figure 1. Comparison of semantic map construction methods.
Traditional offline semantic mapping pipelines (the first row) in-
volve a complex manual annotation pipeline and do not support
timely map updates. Online HD semantic map learning methods
(the second row) rely entirely on onboard sensor observations and
are susceptible to occlusions. We propose the Neural Map Prior
(NMP, the third row), an innovative neural representation of global
maps designed to aid onboard map prediction. NMP is incremen-
tally updated as it continuously integrates new observations from
a fleet of autonomous vehicles.
solutions is limited, and they do not support timely updates
when road conditions change. Consequently, autonomous
vehicles may rely on out-of-date maps, negatively impact-
ing driving safety. Recent research has explored alterna-
tive methods for learning HD semantic maps using onboard
sensor observations, such as camera images and LiDAR
point clouds [11, 13, 15]. These methods typically use deep
learning techniques to infer map elements in real-time, ad-
dressing the map update issue associated with offline maps.
However, the quality of the inferred maps is generally infe-
rior to pre-constructed global maps and may further deterio-
rate under adverse weather conditions and occluded scenar-
ios. The comparison of different semantic map construction
methods is illustrated in Figure 1.
In this work, we propose Neural Map Prior (NMP), a
novel hybrid mapping solution that combines the best of
both worlds. NMP leverages neural representations to build
and update a global map prior, thereby improving map in-
ference performance for autonomous cars. The NMP pro-
cess consists of two primary steps: global map prior up-
date andlocal map inference . The global map prior is a
sparsely tiled neural representation, with each tile corre-
sponding to a specific real-world location. It is automat-
ically developed by aggregating data from a fleet of self-
driving cars. Onboard sensor data and the global map prior
are then integrated into the local map inference process,
which subsequently refines the map prior. These procedures
are interconnected in a feedback loop that grows stronger as
more data is collected from the vast number of vehicles nav-
igating the roads daily. One example is shown in Figure 2.
Technically, the global neural map prior is defined as
sparse map tiles initialized from an empty state. For each
online observation from an autonomous vehicle, a neural
network encoder first extracts local Birds-Eye View (BEV)
features. These features are then refined using the corre-
sponding BEV prior feature derived from the global NMP’s
map tile. The improved BEV features enable us to infer the
local semantic map and update the global NMP. As the ve-
hicles traverse through the various scenes, the local map in-
ference phase and the global map prior update step mutually
reinforce each other, improving the quality of the predicted
local semantic map and maintaining a more complete and
up-to-date global NMP.
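The two coupled operations can be sketched roughly as follows; this is an illustrative approximation rather than the released code, and the feature dimension, the single-head attention, and the GRU-cell update are assumptions.

```python
import torch
import torch.nn as nn

class TileFusion(nn.Module):
    """Refine current BEV features with a stored prior map tile, then update the tile."""
    def __init__(self, dim=64):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, bev_cur, tile_prior):
        # bev_cur, tile_prior: (B, H*W, dim) flattened BEV grids for the same map tile
        refined, _ = self.attn(query=bev_cur, key=tile_prior, value=tile_prior)
        refined = refined + bev_cur                                   # residual refinement
        B, N, D = refined.shape
        tile_new = self.gru(refined.reshape(B * N, D), tile_prior.reshape(B * N, D))
        return refined, tile_new.reshape(B, N, D)                     # refined features, updated prior

fusion = TileFusion()
cur, prior = torch.randn(1, 16, 64), torch.zeros(1, 16, 64)           # empty prior at the first traversal
refined, updated = fusion(cur, prior)
print(refined.shape, updated.shape)
```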
We demonstrate that NMP can be readily applied to var-
ious state-of-the-art HD semantic map learning methods to
enhance accuracy. Experiments on the public nuScenes
dataset reveal that by integrating NMP with cutting-edge
map learning techniques, our pipeline improves perfor-
mance by + 4.32 mIoU for HDMapNet, + 5.02 mIoU for
LSS, + 5.50 mIoU for BEVFormer, and + 3.90 mAP for Vec-
torMapNet.
To summarize, our contributions are as follows:
1. We propose a novel mapping paradigm named Neural
Map Prior that combines offline global map mainte-
nance and online local map inference, while local in-
ference demands similar computational and memory
resources as previous single-frame systems.
2. We propose simple and efficient current-to-prior atten-
tion and GRU modules that are adaptable to mainstream HD
semantic map learning methods and boost their map
prediction results.
3. We evaluate our method on the nuScenes dataset across different map elements and four map segmen-
tation/detection architectures and demonstrate signif-
icant and consistent improvements. Moreover, our
findings show notable progress in challenging situ-
ations involving adverse weather conditions and ex-
tended perception ranges.
|
Xue_SFD2_Semantic-Guided_Feature_Detection_and_Description_CVPR_2023 | Abstract
Visual localization is a fundamental task for various
applications including autonomous driving and robotics.
Prior methods focus on extracting large amounts of often
redundant locally reliable features, resulting in limited ef-
ficiency and accuracy, especially in large-scale environ-
ments under challenging conditions. Instead, we propose
to extract globally reliable features by implicitly embedding
high-level semantics into both the detection and descrip-
tion processes. Specifically, our semantic-aware detector is
able to detect keypoints from reliable regions (e.g. build-
ing, traffic lane) and suppress unreliable areas (e.g. sky,
car) implicitly instead of relying on explicit semantic labels.
This boosts the accuracy of keypoint matching by reducing
the number of features sensitive to appearance changes and
avoiding the need of additional segmentation networks at
test time. Moreover, our descriptors are augmented with
semantics and have stronger discriminative ability, pro-
viding more inliers at test time. Particularly, experiments
on long-term large-scale visual localization Aachen Day-
Night and RobotCar-Seasons datasets demonstrate that our
model outperforms previous local features and gives com-
petitive accuracy to advanced matchers but is about 2 and
3 times faster when using 2k and 4k keypoints, respectively.
Code is available at https://github.com/feixue94/sfd2.
| 1. Introduction
Visual localization is key to various applications includ-
ing autonomous driving and robotics. Structure-based al-
gorithms [54, 57, 64, 69, 73, 79] involving mapping and lo-
calization processes still dominate in large-scale localiza-
tion. Traditionally, handcrafted features ( e.g. SIFT [3, 35],
ORB [53]) are widely used. However, these features are
mainly based on statistics of gradients of local patches and
thus are prone to appearance changes such as illumina-
tion and season variations in the long-term visual localiza-
tion task. With the success of CNNs, learning-based fea-
tures [14, 16, 37, 45, 51, 76, 81] are introduced to replace
handcrafted ones and have achieved excellent performance.
Figure 1. Overview of our framework. Our model implicitly in-
corporates semantics into the detection and description processes
with guidance of an off-the-shelf segmentation network during
the training process. Semantic- and feature-aware guidance are
adopted to enhance its ability of embedding semantic information.
With massive data for training, these methods should be
able to automatically extract keypoints from more reliable
regions ( e.g. building, traffic lane) by focusing on discrim-
inative features [64]. Nevertheless, due to the lack of ex-
plicit semantic signals for training, their ability of selecting
globally reliable keypoints is limited, as shown in Fig. 2
(detailed analysis is provided in Sec. B.1 in the supplemen-
tary material). Therefore, they prefer to extract locally re-
liable features from objects including those which are not
useful for long-term localization ( e.g. sky, tree, car), lead-
ing to limited accuracy, as demonstrated in Table 2.
Recently, advanced matchers based on sparse key-
points [8, 55, 65] or dense pixels [9, 18, 19, 33, 40, 47, 67,
74] are proposed to enhance keypoint/pixel-wise matching
and have obtained remarkable accuracy. Yet, they have
quadratic time complexity due to the attention and corre-
lation volume computation. Moreover, advanced matchers
rely on spatial connections of keypoints and perform image-
wise matching as opposed to fast point-wise matching, so
they take much longer time than nearest neighbor matching
(NN) in both mapping and localization processes because of
a large number of image pairs (much larger than the num-
ber of images) [8]. Alternatively, some works leverage se-
mantics [64, 73, 79] to filter unstable features to eliminate
wrong correspondences and report comparable or even better accu-
racy than advanced matchers [79]. However, they require
additional segmentation networks to provide semantic la-
bels at test time and are fragile to segmentation errors.
Instead, we implicitly incorporate semantics into a local
feature model, allowing it to extract robust features auto-
matically from a single network in an end-to-end fashion. In
the training process, as shown in Fig. 1, we provide explicit
semantics as supervision to guide the detection and descrip-
tion behaviors. Specifically, in the detection process, un-
like most previous methods [14, 16, 32, 37, 51] adopting ex-
haustive detection, we employ a semantic-aware detection
loss to encourage our detector to favor features from reli-
able objects ( e.g. building, traffic lane) and suppress those
from unreliable objects ( e.g. sky). In the description pro-
cess, rather than utilizing triplet loss widely used for de-
scriptor learning [16, 41], we employ a semantic-aware de-
scription loss consisting of two terms: inter- and intra-class
losses. The inter-class loss embeds semantics into descrip-
tors by enforcing features with the same label to be close
and those with different labels to be far. The intra-class
loss, which is a soft-ranking loss [23], operates on features
in each class independently and differentiates these features
from objects of the same label. Such use of soft-ranking
loss avoids the conflict with inter-class loss and retains the
diversity of features in each class ( e.g. features from build-
ings usually have larger diversity than those from traffic
lights). With semantic-aware descriptor loss, our model is
capable of producing descriptors with stronger discrimina-
tive ability. Benefiting from implicit semantic embedding,
our method avoids using additional segmentation networks
at test time and is less fragile to segmentation errors.
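For the inter-class term of the description loss, a generic sketch is given below: descriptors inherit the semantic label of their pixel, and a supervised-contrastive-style objective pulls same-label descriptors together while pushing different-label ones apart. The temperature and exact normalization are assumed; this is not the paper's precise loss.

```python
import torch
import torch.nn.functional as F

def inter_class_loss(desc: torch.Tensor, labels: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """desc: (N, d) keypoint descriptors; labels: (N,) semantic class id of each keypoint.
    Same-class descriptors act as positives, all other descriptors as negatives."""
    desc = F.normalize(desc, dim=-1)
    sim = desc @ desc.t() / tau                                   # (N, N) similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = same & ~eye                                             # positive pairs, excluding self-pairs
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float('-inf')), dim=1, keepdim=True)
    return -(log_prob[pos]).mean()                                # average over all positive pairs

d = torch.randn(16, 128)
y = torch.randint(0, 4, (16,))
print(inter_class_loss(d, y))
```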
As the local feature network is much simpler than typi-
cal segmentation networks e.g. UperNet [10], we also adopt
an additional feature-consistency loss on the encoder to
enhance its ability of learning semantic information. To
avoid using costly to obtain ground-truth labels, we train
our model with outputs of an off-the-shelf segmentation net-
work [11, 34], which has achieved SOTA performance on
the scene parsing task [83], but other semantic segmenta-
tion networks ( e.g. [10]) can also be used.
An overview of our system is shown in Fig. 1. We em-
bed semantics implicitly into the feature detection and de-
scription network via the feature-aware and semantic-aware
guidance in the training process. At test time, our model
produces semantic-aware features from a single network di-
rectly. We summarize contributions as follows:
We propose a novel feature network which implicitly
incorporates semantics into detection and description
processes at training time, enabling the model to pro-
duce semantic-aware features end-to-end at test time.
We adopt a combination of semantic-aware and
feature-aware guidance strategy to make the model
Figure 2. Locally reliable features. We show the top 1k keypoints (reliability high → low: 1-250, 251-500, 501-750, 751-1000) of prior local features including SPP [14], D2Net [16], R2D2 [51], and ASLFeat [37]. They indiscriminately give high reliability to patches with rich textures, even on objects such as sky, tree, pedestrian, and car that are not reliable for long-term localization (best viewed in color).
embed semantic information more effectively.
Our method outperforms previous local features on the
long-term localization task and gives competitive ac-
curacy to advanced matchers but has higher efficiency.
Experiments show our method achieves a better trade-
off between accuracy and efficiency than advanced match-
ers [8, 55, 65] especially on devices with limited computing
resources. We organize the rest of this paper as follows. In
Sec. 2, we introduce related works. In Sec. 3, we describe
our method in detail. We discuss experiments and limita-
tions in Sec. 4 and Sec. 5 and conclude the paper in Sec. 6.
|
Wu_Bidirectional_Cross-Modal_Knowledge_Exploration_for_Video_Recognition_With_Pre-Trained_Vision-Language_CVPR_2023 | Abstract
Vision-language models (VLMs) pre-trained on large-
scale image-text pairs have demonstrated impressive trans-
ferability on various visual tasks. Transferring knowledge
from such powerful VLMs is a promising direction for build-
ing effective video recognition models. However, current
exploration in this field is still limited. We believe that the
greatest value of pre-trained VLMs lies in building a bridge
between visual and textual domains. In this paper, we pro-
pose a novel framework called BIKE , which utilizes the
cross-modal bridge to explore bidirectional knowledge: i)
We introduce the Video Attribute Association mechanism,
which leverages the Video-to-Text knowledge to gen-
erate textual auxiliary attributes for complementing video
recognition. ii) We also present a Temporal Concept Spot-
ting mechanism that uses the Text-to-Video expertise
to capture temporal saliency in a parameter-free manner,
leading to enhanced video representation. Extensive stud-
ies on six popular video datasets, including Kinetics-400 &
600, UCF-101, HMDB-51, ActivityNet and Charades, show
that our method achieves state-of-the-art performance in
various recognition scenarios, such as general, zero-shot,
and few-shot video recognition. Our best model achieves
a state-of-the-art accuracy of 88.6% on the challenging
Kinetics-400 using the released CLIP model. The code is
available at https://github.com/whwu95/BIKE .
| 1. Introduction
In recent years, the remarkable success of large-
scale pre-training in NLP ( e.g., BERT [9], GPT [4, 38],
ERNIE [69] and T5 [39]) has inspired the computer vi-
sion community. Vision-language models (VLMs) lever-
age large-scale noisy image-text pairs with weak correspon-
dence for contrastive learning ( e.g., CLIP [37], ALIGN
[19], CoCa [65], Florence [66]), and demonstrate impres-
sive transferability across a wide range of visual tasks.
Figure 1. Illustration of the difference between our paradigm (c) and the existing unimodality paradigm (a) and cross-modal paradigm (b): (a) traditional video recognition; (b) category embedding as classifier for video recognition; (c) bidirectional knowledge exploration for video recognition. Please zoom in for the best view.
Naturally, transferring knowledge from such powerful
pre-trained VLMs is emerging as a promising paradigm for
building video recognition models. Currently, exploration
in this field can be divided into two lines. As depicted in
Figure 1(a), one approach [27,35,64] follows the traditional
unimodal video recognition paradigm, initializing the video
encoder with the pre-trained visual encoder of VLM. Con-
versely, the other approach [21,34,48,58] directly transfers
the entire VLM into a video-text learning framework that
utilizes natural language ( i.e., class names) as supervision,
as shown in Figure 1(b). This leads to a question: have we
fully utilized the knowledge of VLMs for video recognition?
In our opinion, the answer is No. The greatest charm of
VLMs is their ability to build a bridge between the visual
and textual domains. Despite this, previous research em-
ploying pre-aligned vision-text features of VLMs for video
recognition has only utilized unidirectional video-to-text
matching. In this paper, we aim to facilitate bidirectional
knowledge exploration through the cross-modal bridge for
enhanced video recognition. With this in mind, we mine
Video-to-Text andText-to-Video knowledge by
1) generating textual information from the input video and
2) utilizing category descriptions to extract valuable video-
related signals.
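As a reading aid for the Temporal Concept Spotting idea described in the abstract, the sketch below shows one parameter-free way to realize text-to-video saliency: per-frame similarities to the category text embedding are softmax-normalized over time and used to weight temporal pooling. The temperature and the pooling form are assumptions.

```python
import torch
import torch.nn.functional as F

def concept_guided_pooling(frame_feats, text_feat, tau: float = 0.01):
    """frame_feats: (T, d) per-frame visual features; text_feat: (d,) category text embedding.
    Returns a saliency-weighted video feature plus the per-frame saliency weights."""
    f = F.normalize(frame_feats, dim=-1)
    t = F.normalize(text_feat, dim=-1)
    saliency = F.softmax(f @ t / tau, dim=0)          # parameter-free temporal saliency, shape (T,)
    video_feat = (saliency.unsqueeze(-1) * f).sum(0)  # frames similar to the concept count more
    return video_feat, saliency

frames = torch.randn(8, 512)
text = torch.randn(512)
feat, w = concept_guided_pooling(frames, text)
print(feat.shape, w.shape)  # torch.Size([512]) torch.Size([8])
```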
In the first Video-to-Text direction, a common
practice for mining VLM knowledge is to embed the in-
put video and category description into a pre-aligned fea-
ture space, and then select the category that is closest to
the video, as illustrated in Figure 1(b), which serves as our
baseline. One further question naturally arises: Can we
incorporate auxiliary textual information for video recog-
nition? To address this question, |
Xiao_3D_Semantic_Segmentation_in_the_Wild_Learning_Generalized_Models_for_CVPR_2023 | Abstract
Robust point cloud parsing under all-weather condi-
tions is crucial to level-5 autonomy in autonomous driving.
However, how to learn a universal 3D semantic segmen-
tation (3DSS) model is largely neglected as most existing
benchmarks are dominated by point clouds captured under
normal weather. We introduce SemanticSTF , an adverse-
weather point cloud dataset that provides dense point-level
annotations and allows to study 3DSS under various ad-
verse weather conditions. We study all-weather 3DSS mod-
eling under two setups: 1) domain adaptive 3DSS that
adapts from normal-weather data to adverse-weather data;
2) domain generalizable 3DSS that learns all-weather 3DSS
models from normal-weather data. Our studies reveal the
challenge while existing 3DSS methods encounter adverse-
weather data, showing the great value of SemanticSTF in
steering the future endeavor along this very meaningful re-
search direction. In addition, we design a domain random-
ization technique that alternatively randomizes the geome-
try styles of point clouds and aggregates their embeddings,
ultimately leading to a generalizable model that can im-
prove 3DSS under various adverse weather effectively. The
SemanticSTF and related codes are available at https:
//github.com/xiaoaoran/SemanticSTF .
†Corresponding author
| 1. Introduction
3D LiDAR point clouds play an essential role in se-
mantic scene understanding in various applications such as
self-driving vehicles and autonomous drones. With the re-
cent advance of LiDAR sensors, several LiDAR point cloud
datasets [2, 11, 49] such as SemanticKITTI [2] have been
proposed which greatly advanced the research in 3D se-
mantic segmentation (3DSS) [19, 41, 62] for the task of
point cloud parsing. As of today, most existing point cloud
datasets for outdoor scenes are dominated by point clouds
captured under normal weather. However, 3D vision ap-
plications such as autonomous driving require reliable 3D
perception under all-weather conditions including various
adverse weather such as fog, snow, and rain. How to learn
a weather-tolerant 3DSS model is largely neglected due to
the absence of related benchmark datasets.
Although several studies [3, 33] attempt to include ad-
verse weather conditions in point cloud datasets, such as the
STF dataset [3] that consists of LiDAR point clouds cap-
tured under various adverse weather, these efforts focus on
object detection benchmarks and do not provide any point-
wise annotations which are critical in various tasks such as
3D semantic and instance segmentation. To address this
gap, we introduce SemanticSTF , an adverse-weather point
cloud dataset that extends the STF Detection Benchmark by
providing point-wise annotations of 21 semantic categories,
as illustrated in Fig. 1. Similar to STF, SemanticSTF cap-
tures four typical adverse weather conditions that are fre-
quently encountered in autonomous driving including dense
fog, light fog, snow, and rain.
SemanticSTF provides a great benchmark for the study
of 3DSS and robust point cloud parsing under adverse
weather conditions. Beyond serving as a well-suited test
bed for examining existing fully-supervised 3DSS meth-
ods that handle adverse-weather point cloud data, Semantic-
STF can be further exploited to study two valuable weather-
tolerant 3DSS scenarios: 1) domain adaptive 3DSS that
adapts from normal-weather data to adverse-weather data,
and 2) domain generalizable 3DSS that learns all-weather
3DSS models from normal-weather data. Our studies reveal
the challenges faced by existing 3DSS methods while pro-
cessing adverse-weather point cloud data, highlighting the
significant value of SemanticSTF in guiding future research
efforts along this meaningful research direction.
In addition, we design PointDR, a new baseline frame-
work for the future study and benchmarking of all-weather
3DSS. Our objective is to learn robust 3D representations
that can reliably represent points of the same category
across different weather conditions while remaining dis-
criminative across categories. However, robust all-weather
3DSS poses two major challenges: 1) LiDAR point clouds
are typically sparse, incomplete, and subject to substantial
geometric variations and semantic ambiguity. These chal-
lenges are further exacerbated under adverse weather con-
ditions, with many missing points and geometric distortions
due to fog, snow cover, etc. 2) More noise is introduced
under adverse weather due to snowflakes, rain droplets, etc.
PointDR addresses the challenges with two iterative oper-
ations: 1) Geometry style randomization that expands the
geometry distribution of point clouds under various spatial
augmentations; 2) Embedding aggregation that introduces
contrastive learning to aggregate the encoded embeddings
of the randomly augmented point clouds. Despite its sim-
plicity, extensive experiments over point clouds of different
adverse weather conditions show that PointDR achieves su-
perior 3DSS generalization performance.
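A rough sketch of the two alternating operations is given below as a reading aid; the concrete augmentations, the InfoNCE-style loss, the temperature, and the use of a single global embedding per scene (rather than dense per-point features) are all simplifying assumptions.

```python
import math
import torch
import torch.nn.functional as F

def randomize_geometry(points: torch.Tensor) -> torch.Tensor:
    """Simple geometry-style randomization: random rotation about z, random scaling, jitter."""
    theta = 2 * math.pi * torch.rand(1).item()
    rot = torch.tensor([[math.cos(theta), -math.sin(theta), 0.0],
                        [math.sin(theta),  math.cos(theta), 0.0],
                        [0.0,              0.0,             1.0]])
    scale = 0.9 + 0.2 * torch.rand(1).item()
    return points @ rot.t() * scale + 0.01 * torch.randn_like(points)

def aggregation_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Contrastive embedding aggregation: embeddings of two random views of the same scene
    are pulled together; other scenes in the batch act as negatives (InfoNCE-style)."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                        # (B, B) similarity matrix
    targets = torch.arange(z1.shape[0])
    return F.cross_entropy(logits, targets)

# toy usage with a stand-in encoder that mean-pools point coordinates into an embedding
encoder = lambda p: p.mean(dim=1)                     # (B, N, 3) -> (B, 3)
batch = torch.rand(4, 2048, 3)
view_a = torch.stack([randomize_geometry(p) for p in batch])
view_b = torch.stack([randomize_geometry(p) for p in batch])
print(aggregation_loss(encoder(view_a), encoder(view_b)))
```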
The contribution of this work can be summarized in three
major aspects. First , we introduce SemanticSTF, a large-
scale adverse-weather point cloud benchmark that provides
high-quality point-wise annotations of 21 semantic cate-
gories. Second , we design PointDR, a point cloud do-
main randomization baseline that can be exploited for future
study and benchmarking of 3DSS under all-weather condi-
tions. Third , leveraging SemanticSTF, we benchmark exist-
ing 3DSS methods over two challenging tasks on domain
adaptive 3DSS and domain generalized 3DSS. The bench-
marking efforts lay a solid foundation for future research on
this highly meaningful problem. |
Wu_Attention-Based_Point_Cloud_Edge_Sampling_CVPR_2023 | Abstract
Point cloud sampling is a less explored research topic
for this data representation. The most commonly used sam-
pling methods are still classical random sampling and far-
thest point sampling. With the development of neural net-
works, various methods have been proposed to sample point
clouds in a task-based learning manner. However, these
methods are mostly generative-based, rather than selecting
points directly using mathematical statistics. Inspired by
the Canny edge detection algorithm for images and with
the help of the attention mechanism, this paper proposes a
non-generative Attention-based Point cloud Edge Sampling
method (APES), which captures salient points in the point
cloud outline. Both qualitative and quantitative experimen-
tal results show the superior performance of our sampling
method on common benchmark tasks.
| 1. Introduction
Point clouds are a widely used data representation in var-
ious domains including autonomous driving, augmented re-
ality, and robotics. Due to the typically large amount of
data, the sampling of a representative subset of points is a
fundamental and important task in 3D computer vision.
Apart from random sampling (RS), other classical point
sampling methods including grid sampling, uniform sam-
pling, and geometric sampling have been well-established.
Grid sampling samples points with regular grids and thus
cannot control the number of sampled points exactly. Uni-
form sampling takes the points in the point cloud evenly
and is more popular due to its robustness. Farthest point
sampling (FPS) [9, 29] is the most famous of them and has
been widely used in many current methods when downsam-
pling operations are required [19, 34, 47, 52, 56]. Geomet-
ric sampling samples points based on local geometry, such
as the curvature of the underlying shape. Another example
of Inverse Density Importance Sampling (IDIS) [11] sam-
ples points whose distance sum values with neighbors are
smaller. But this method requires the point cloud to have a
high density throughout, and it performs even worse when
the raw point cloud has an uneven distribution.
Figure 1. Similar to the Canny edge detection algorithm that de-
tects edge pixels in images, our proposed APES algorithm samples
edge points which indicate the outline of the input point clouds.
The blue grids/spheres represent the local patches for given center
pixels/points.
In addition to the above mathematical statistics-based
methods, with the development of deep learning techniques,
several neural network-based methods have been proposed
for task-oriented sampling, including S-Net [8], SampleNet
[16], DA-Net [21], etc. They use simple multi-layer per-
ceptrons (MLPs) to generate new point cloud sets of desired
sizes as resampled results, supplemented by different post-
processing operations. MOPS-Net [36] learns a sampling
transformation matrix first, and then generates the sampled
point cloud by multiplying it with the original point cloud.
However, all these methods are generative-based, rather
than selecting points directly. On the other hand, there is an
increasing body of work designing neural network-based lo-
cal feature aggregation operators for point clouds. Although
some of them (e.g., PointCNN [19], PointASNL [52], GSS
[53]) decrease the point number while learning latent fea-
tures, they can hardly be considered as sampling methods
in the true sense as no real spatial points exist during the
processing. Moreover, none of the above methods consider
shape outlines as special features.
In this paper, we propose a point cloud edge sampling
method that combines neural network-based learning and
mathematical statistics-based direct point selecting. One
key to the success of 2D image processing with neural net-
works is that they can detect primary edges and use them
to form shape contours implicitly in the latent space [55].
Inspired by that insight, we pursue the idea of focusing
on salient outline points (edge points) for the sampling of
point cloud subsets for downstream tasks. Broadly speak-
ing, edge detection may be considered a special sampling
strategy. Hence, by revisiting the Canny edge detection al-
gorithm [4] which is a widely-recognized classical edge de-
tection method for images, we propose our attention-based
point cloud edge sampling (APES) method for point clouds.
It uses the attention mechanism [42] to compute correla-
tion maps and sample edge points whose properties are re-
flected in these correlation maps. We propose two kinds
of APES with two different attention modes. Based on
neighbor-to-point (N2P) attention which computes correla-
tion maps between each point and its neighbors, local-based
APES is proposed. Based on point-to-point (P2P) atten-
tion which computes a correlation map between all points,
global-based APES is proposed. Our proposed method se-
lects sampled points directly, and the intermediate result
preserves the point index meaning so they can be visualized
easily. Moreover, our method can downsample the input
point cloud to any desired size.
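For intuition, the sketch below mimics the global (P2P) variant: a full point-to-point attention map is computed from point features, and points are ranked by a per-point statistic of that map. The choice of statistic (standard deviation), the single-head formulation, and the feature source are assumptions for illustration, not necessarily the paper's exact scoring rule.

```python
import torch

def p2p_edge_sampling(points: torch.Tensor, feats: torch.Tensor, m: int) -> torch.Tensor:
    """points: (N, 3) coordinates; feats: (N, d) per-point features; m: points to keep.
    Builds an N x N attention map and keeps the m points whose attention columns have the
    highest standard deviation (a proxy for salient, edge-like points)."""
    d = feats.shape[-1]
    attn = torch.softmax(feats @ feats.t() / d ** 0.5, dim=-1)  # (N, N) correlation map
    score = attn.std(dim=0)                                     # per-point saliency score (assumed statistic)
    idx = torch.topk(score, k=m).indices                        # direct point selection, no generation
    return points[idx]

pts = torch.rand(1024, 3)
f = torch.rand(1024, 32)
print(p2p_edge_sampling(pts, f, m=512).shape)  # torch.Size([512, 3])
```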
We summarize our contributions as follows:
• A point cloud edge sampling method termed APES that
combines neural network-based learning and mathemat-
ical statistics-based direct point selecting.
• Two variants of local-based APES and global-based
APES, by using two different attention modes.
• Good qualitative and quantitative results on common
point cloud benchmarks, demonstrating the effectiveness
of the proposed sampling method.
|
Wang_ProphNet_Efficient_Agent-Centric_Motion_Forecasting_With_Anchor-Informed_Proposals_CVPR_2023 | Abstract
Motion forecasting is a key module in an autonomous
driving system. Due to the heterogeneous nature of multi-
sourced input, multimodality in agent behavior, and low la-
tency required by onboard deployment, this task is notori-
ously challenging. To cope with these difficulties, this paper
proposes a novel agent-centric model with anchor-informed
proposals for efficient multimodal motion prediction. We
design a modality-agnostic strategy to concisely encode the
complex input in a unified manner. We generate diverse
proposals, fused with anchors bearing goal-oriented scene
context, to induce multimodal prediction that covers a wide
range of future trajectories. Our network architecture is
highly uniform and succinct, leading to an efficient model
amenable for real-world driving deployment. Experiments
reveal that our agent-centric network compares favorably
with the state-of-the-art methods in prediction accuracy,
while achieving scene-centric level inference latency.
| 1. Introduction
Predicting the future behaviors of various road partici-
pants is an essential task for autonomous driving systems
to be able to safely and comfortably operate in dynamic
driving environments. However, motion forecasting is ex-
tremely challenging in the sense that (i) the input consists of
interrelated modalities from multiple sources; (ii) the output
is inherently stochastic and multimodal; and (iii) the whole
prediction pipeline must fulfill tight run time requirements
with a limited computation budget.
A motion forecasting model typically collects the com-
prehensive information from perception signals and high-
definition (HD) maps, such as traffic light states, motion
history of agents, and the road graph [11, 13, 14, 23]. Such
a collection of information is a heterogeneous mix of static
and dynamic, as well as discrete and continuous elements.
Moreover, there exist complex semantic relations between
these components, including agent-to-agent, agent-to-road,
Figure 1. Overview of the accuracy-and-latency trade-off for the task
of motion forecasting on Argoverse-1. ProphNet outperforms the
state-of-the-art methods in prediction accuracy and considerably
speeds up the agent-centric inference latency, leading to the best
balance between accuracy and latency.
and road-to-road interactions. Previous methods in the field
usually model this diverse set of input via an equally intri-
cate system with modality-specific designs. LaneGCN [9]
adopts four sub-networks to separately process the inter-
actions of agent-to-agent, agent-to-road, road-to-agent and
road-to-road. MultiPath++ is developed in [21] to employ
multi-context gating to first capture the interactions between
a target agent and other agents, and then fuse them with the
map in a hierarchical manner.
Due to the unknown intent of an agent, motion predic-
tion output is highly multimodal by nature. It is often im-
possible to assert with full certainty whether a vehicle will
go straight or turn right as it approaches an intersection.
The motion prediction model is therefore required to accu-
rately model the underlying probability distribution of fu-
ture behaviors. Recently, there have been various efforts
on improving output multimodality. TNT [30] introduces
three stages including goal (endpoint of a predicted trajec-
tory) prediction, goal-conditioned motion estimation, and
trajectory scoring. DenseTNT is proposed in [7] to improve
upon the former by predicting from dense goal candidates
and conducting an iterative offline optimization algorithm to
generate multi-future pseudo labels. A region based train-
ing strategy is developed in mmTransformer [12] to manu-
ally partition a scene into subregions to enforce predictions
to fall into different regions to promote multimodality.
In a practical autonomous driving system, the whole pre-
diction pipeline typically operates at a frequency of 10Hz,
including (i) feature extraction from upstream modules, (ii)
network inference, and (iii) post-processing. Therefore, a
motion forecasting network is required to meet the strict
inference latency constraint to be useful in the real-world
setting. While the recent agent-centric paradigm [15, 21] is
trending and has made remarkable progress on improving
accuracy, its latency is dramatically increased compared to
its scene-centric counterpart [5, 12], as shown in Figure 1.
This is because agent-centric models compute scene repre-
sentations with respect to each agent separately, meaning
the amount of features to process is dozens of times larger
than that with scene-centric models. This substantially in-
creases the latency of the whole prediction pipeline and is
computationally prohibitive for onboard deployment.
In light of these observations, we present ProphNet : an
efficient agent-centric motion forecasting model that fuses
heterogeneous input in a unified manner and enhances mul-
timodal output via a design of anchor-informed proposals.
In contrast to the existing complex modality-specific pro-
cessing [9, 21], we develop a modality-agnostic architec-
ture that combines the features of agent history, agent rela-
tion and road graph as the agent-centric scene representa-
tion (AcSR), which directly feeds to a unified self-attention
encoder [22]. In this way, we motivate the self-attention
network to learn the complex interactions with minimum in-
ductive bias. Additionally, we propose the anchor-informed
proposals (AiP) in an end-to-end learning fashion to induce
output multimodality. In our approach, proposals are the
future trajectory embeddings conjectured exclusively from
agent history, and anchors refer to the goal embeddings
learned from AcSR. In such a manner, the network learns
to first generate proposals to maximize diversity without
environmental restrictions, and then select and refine after
absorbing anchors that carry rich goal-oriented contextual
information. Based on random combinations of AiP, we
further introduce the hydra prediction heads to encourage
the network to learn complementary information and mean-
while perform ensembling readily.
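For intuition, the toy sketch below mocks up the token flow: modality-agnostic tokens are concatenated and fed to one self-attention encoder, proposals are produced from the history tokens alone, anchors are pooled from the encoded scene, and the two are fused additively before a trajectory head. All dimensions, token counts, and the pooling and fusion choices are illustrative assumptions, not the actual ProphNet design details.

```python
import torch
import torch.nn as nn

class ProposalAnchorSketch(nn.Module):
    """Toy token flow: unified encoder over mixed-modality tokens, history-only proposals,
    scene-pooled anchors, additive fusion, and a multimodal trajectory head."""
    def __init__(self, dim=64, num_props=6, horizon=30):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.proposal_net = nn.Linear(dim, num_props * dim)   # proposals conjectured from history alone
        self.anchor_net = nn.Linear(dim, dim)                 # goal-oriented anchors from scene context
        self.head = nn.Linear(dim, horizon * 2)               # (x, y) per future step
        self.num_props, self.dim, self.horizon = num_props, dim, horizon

    def forward(self, hist_tok, rel_tok, road_tok):
        # each token tensor: (B, num_tokens, dim); their concatenation plays the role of the
        # agent-centric scene representation fed to a single modality-agnostic encoder
        scene = self.encoder(torch.cat([hist_tok, rel_tok, road_tok], dim=1))
        proposals = self.proposal_net(hist_tok.mean(dim=1)).view(-1, self.num_props, self.dim)
        anchors = self.anchor_net(scene.mean(dim=1, keepdim=True))   # (B, 1, dim)
        fused = proposals + anchors                                  # anchor-informed proposals
        return self.head(fused).view(-1, self.num_props, self.horizon, 2)

net = ProposalAnchorSketch()
trajs = net(torch.randn(2, 10, 64), torch.randn(2, 8, 64), torch.randn(2, 50, 64))
print(trajs.shape)  # torch.Size([2, 6, 30, 2])
```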
As illustrated in Figure 2, this design formulates a suc-
cinct network with high efficiency in terms of both archi-
tecture and inference. Instead of employing a self-attention
encoder for each input modality [9,12], the self-attention on
AcSR unifies the learning of intra- and inter-modality inter-
actions in a single compact space, which allows the network
to assign associations within and across input modalities
with maximum flexibility. Our model also considerably re-duces inference latency and performs prediction with a peak
latency as low as 28ms. In addition, rather than heuristically
predefining or selecting goals to strengthen output multi-
modality [1, 7, 30], AiP is learned end-to-end to infuse goal
based anchors to proposals that are deliberately produced
with diverse multimodality. In the end, together with the
hydra prediction heads, our network is capable of generat-
ing future trajectories with rich variety.
Our main contributions are summarized as follows. First,
we develop an input-source-agnostic strategy based on
AcSR to model heterogeneous input and simplify network
architecture, making the model amenable for real-world
driving deployment. Second, we propose a novel frame-
work that couples proposal and anchor learning end-to-end
through AiP to promote output multimodality. Third, we in-
troduce hydra prediction heads to learn complementary pre-
diction and explore ensembling. Fourth, we show that our
network, as an agent-centric model, achieves state-of-the-
art accuracy on the popular benchmarks while maintaining
scene-centric level low inference latency.
|
Xu_Learning_Multi-Modal_Class-Specific_Tokens_for_Weakly_Supervised_Dense_Object_Localization_CVPR_2023 | Abstract
Weakly supervised dense object localization (WSDOL)
relies generally on Class Activation Mapping (CAM), which
exploits the correlation between the class weights of the im-
age classifier and the pixel-level features. Due to the lim-
ited ability to address intra-class variations, the image clas-
sifier cannot properly associate the pixel features, leading
to inaccurate dense localization maps. In this paper, we
propose to explicitly construct multi-modal class represen-
tations by leveraging the Contrastive Language-Image Pre-
training (CLIP), to guide dense localization. More specifi-
cally, we propose a unified transformer framework to learn
two modalities of class-specific tokens, i.e., class-specific
visual and textual tokens. The former captures semantics
from the target visual data while the latter exploits the class-
related language priors from CLIP , providing complemen-
tary information to better perceive the intra-class diversi-
ties. In addition, we propose to enrich the multi-modal
class-specific tokens with sample-specific contexts compris-
ing visual context and image-language context. This en-
ables more adaptive class representation learning, which
further facilitates dense localization. Extensive experiments
show the superiority of the proposed method for WSDOL on
two multi-label datasets, i.e., PASCAL VOC and MS COCO,
and one single-label dataset, i.e., OpenImages. Our dense
localization maps also lead to the state-of-the-art weakly
supervised semantic segmentation (WSSS) results on PAS-
CAL VOC and MS COCO.1
| 1. Introduction
Fully supervised dense prediction tasks have achieved
great success, which however comes at the cost of expensive
pixel-level annotations. To address this issue, recent works
have investigated the use of weak labels, such as image-
1https://github.com/xulianuwa/MMCST
Figure 1. (a) CAM exploits the correlation between the image
classifier and the pixel features. (b) We propose to construct multi-
modal class-specific tokens to guide dense object localization.
level labels, to generate dense object localization maps as
pseudo labels for those tasks. For the weakly supervised ob-
ject localization (WSOL) task, most methods evaluate local-
ization results on the bounding-box level and a few recent
methods [5] evaluate on the pixel level. We use WSDOL
to focus on the pixel-level evaluation, which is critical for
downstream dense prediction tasks such as WSSS.
Previous works have exploited Convolutional Neural
Networks (CNNs) and Vision Transformers (ViTs) [7] for
WSDOL with image-level labels [21, 40]. These meth-
ods have generally relied on Class Activation Mapping
(CAM) [48], which generates class-specific localization
maps by computing the correlation between the class-
specific weight vectors of the image classifier and every
pixel feature vector. However, image classifiers generally
have a limited ability to address the intra-class variation, let
alone at the pixel level. This thus leads to inaccurate dense
localization results. In the conventional fully supervised
learning paradigm, the image classification model aims to
convert images to numeric labels, ignoring the context of
the labels. Hence, it tends to learn the pattern that max-
imizes the inter-class differences but disregards the intra-
class diversities. This largely restricts the model’s ability of
understanding semantic objects.
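As a reference for how CAM couples classifier weights with pixel-level features, the snippet below sketches the standard computation; the feature-map shapes are placeholders and this is not the authors' code.

```python
# Standard CAM sketch (NumPy): correlate the classifier weight of class c
# with every pixel-level feature vector to obtain a localization map.
import numpy as np

C, H, W = 512, 28, 28            # feature channels and spatial size (assumed)
num_classes = 20

features = np.random.rand(C, H, W)           # backbone feature map
fc_weights = np.random.rand(num_classes, C)  # image-classifier weights

c = 3                                                     # class of interest
cam = np.einsum('c,chw->hw', fc_weights[c], features)     # (H, W)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)
```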
Recently, Vision-Language (VL) models have attracted
much attention. In particular, CLIP, a representative VL
model, pre-trained on 400 million image-text pairs that are
readily available publicly, has been successfully applied to
a number of downstream tasks, due to its strong general-
ization ability. CLIP introduces a contrastive representation
learning method that constrains an image to match its re-
lated text while dis-matching the remaining texts from the
same batch, in a multi-modal embedding space. This en-
ables the model to perceive the differences across images,
thus facilitating it to better discriminate intra-class samples.
Motivated by these observations, we propose to lever-
age the strong representations of visual concepts encoded
by the pre-trained CLIP language model to guide the dense
object localization. More specifically, we extract the class-
related text embeddings by feeding the label prompts to the
pre-trained CLIP language model. As shown in Figure 1,
we propose a unified transformer framework which includes
multi-modal class-specific tokens, i.e.,class-specific visual
tokens and class-specific textual tokens. The class-specific
visual tokens aim to capture visual representations from the
target image dataset, while the class-specific textual tokens
take the rich language semantics from the CLIP label text
embeddings. These two modalities of class-specific tokens,
with complementary information, are jointly used to corre-
late pixel features, contributing to better dense localization.
In order to construct more adaptive class representations,
which can better associate the sample-specific local features
for dense localization, we propose to enhance the global
multi-modal class-specific tokens with sample-specific con-
textual information. To this end, we introduce two designs:
(i) at the feature level, we use the sample-specific visual
context to enhance both the class-specific visual and tex-
tual tokens. This is achieved by combining these global to-
kens with their output local counterparts which aggregate
the patch tokens of the image through the self-attention lay-
ers; ( ii) at the loss level, we introduce a regularization con-
trastive loss to encourage the output text tokens to match the
CLIP image embeddings. This allows the CLIP model to be
better adapted to our target datasets. Moreover, due to its
image-language matching pre-training objective, the CLIP
image encoder is learned to extract the image embeddings
that match the CLIP text embeddings of their corresponding
image captions. We thus argue that through this contrastive
loss, the rich image-related language context from the CLIP
could be implicitly transferred to the text tokens, which are
more beneficial for guiding the dense object localization,
compared to the simple label prompts.
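A rough sketch of such a regularization term is given below: output text tokens are pulled toward the CLIP image embeddings of the same sample with a symmetric InfoNCE-style loss. The embedding dimension, temperature, one-token-per-sample simplification, and batch construction are assumptions, not the paper's exact formulation.

```python
# Sketch (PyTorch) of an image-language regularization loss: match each
# sample's output text token to its CLIP image embedding within a batch.
import torch
import torch.nn.functional as F

B, D = 8, 512                                         # batch size, embedding dim (assumed)
text_tokens = torch.randn(B, D, requires_grad=True)   # output text tokens (simplified)
clip_image_emb = torch.randn(B, D)                    # frozen CLIP image embeddings

t = F.normalize(text_tokens, dim=-1)
v = F.normalize(clip_image_emb, dim=-1)
logits = t @ v.t() / 0.07                             # temperature is an assumption

targets = torch.arange(B)
loss = 0.5 * (F.cross_entropy(logits, targets) +
              F.cross_entropy(logits.t(), targets))
loss.backward()
print(float(loss))
```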
In summary, the contribution of this work is three-fold:
• We propose a new WSDOL method by explicitly con-
structing multi-modal class representations in a unified
transformer framework.• The proposed transformer includes class-specific vi-
sual tokens and class-specific textual tokens, which are
learned from different data modalities with diverse super-
visions, thus providing complementary information for
more discriminative dense localization.
• We propose to enhance the multi-modal global class rep-
resentations by using sample-specific visual context via
the global-local token fusion and transferring the image-
language context from the pre-trained CLIP via a regular-
ization loss. This enables more adaptive class representa-
tions for more accurate dense localization.
The proposed method achieved state-of-the-art re-
sults on PASCAL VOC 2012 (72.2% on the test set) and
MS COCO 2014 (45.9% on the validation set) for WSSS.
|
Xie_OmniVidar_Omnidirectional_Depth_Estimation_From_Multi-Fisheye_Images_CVPR_2023 | Abstract
Estimating depth from four large field of view (FoV) cam-
eras has been a difficult and understudied problem. In this
paper, we propose a novel and simple system that can con-
vert this difficult problem into easier binocular depth esti-
mation. We name this system OmniVidar, as its results are
similar to LiDAR, but rely only on vision. OmniVidar con-
tains three components: (1) a new camera model to address
the shortcomings of existing models, (2) a new multi-fisheye
camera based epipolar rectification method for solving the
image distortion and simplifying the depth estimation prob-
lem, (3) an improved binocular depth estimation network,
which achieves a better balance between accuracy and effi-
ciency. Unlike other omnidirectional stereo vision methods,
OmniVidar does not contain any 3D convolution, so it can
achieve higher resolution depth estimation at fast speed.
Results demonstrate that OmniVidar outperforms all other
methods in terms of accuracy and speed.
| 1. Introduction
Depth estimation from images is an important research
field in computer vision, as it enables the acquisition of
depth information with low-cost cameras for a wide range
of applications. Traditional stereo cameras with pinhole
lenses are limited in their FoV . However, many scenarios
require an omnidirectional depth map, such as autonomous
driving [42] and robot navigation [13, 43]. Although there
are active sensors available that can provide omnidirectional
depth information, such as LiDAR [25], their high cost
makes them less accessible than stereo vision. Passive sen-
sors, such as RGB cameras, are a common choice for depth
estimation due to their low cost, lightweight, and low power
consumption. To increase the FoV , fisheye lenses are often
introduced into stereo vision setups.
Over the past few decades, various methods have been
proposed for depth estimation using fisheye cameras. These
*Corresponding author.
Figure 1. Our prototype built with four 250° fisheye cameras
and its results: a dense inverse-distance map and point cloud. It
achieves good depth estimation results in real scenes and
real-time performance on a modern GPU.
include the binocular fisheye system [31], up-down fisheye
system [13], and catadioptric cameras [17, 23, 32]. How-
ever, all of these approaches have limitations. The binoc-
ular fisheye system cannot provide an omnidirectional per-
ception. The up-down fisheye system and catadioptric cam-
eras can offer horizontal 360° depth perception, but their
vertical FoV is limited. Furthermore, catadioptric cameras
tend to be bulky, challenging to calibrate, and prone to er-
rors. It turns out that the best choice for omnidirectional
depth estimation is a system consisting of four cameras
with extremely wide FoV (>180°). This system enables
360° depth estimation both horizontally and vertically, and
is light-weight, convenient to calibrate and maintain. Sev-
eral studies [21,29,39,41] have shown excellent results with
this approach. However, these methods often use many 3D
convolutions, resulting in low efficiency. To adapt to lim-
ited memory, the images must be down-sampled, leading to
a lower resolution of the output depth map. Furthermore,
many of these approaches perform depth estimation using
the original fisheye image without explicitly processing im-
age distortion, which leads to the construction of a tedious
and complicated cost volume and can result in reduced ac-
curacy.
Moreover, due to the complex imaging process of large
FoV fisheye lenses, handling fisheye image distortion using
mathematical formulas can be a challenging task. While
several excellent large FoV fisheye camera models have
been proposed [2, 18, 19, 27, 30, 36], our experiments have
shown that none of these models can accurately approxi-
mate the imaging process, and we get less satisfactory depth
estimation results when using them.
Inspired by the above observations, we propose Om-
niVidar, a novel and simple multi-fisheye omnidirectional
depth estimation system, as shown in Figure 2. OmniVi-
dar contains three components. Firstly, we improve the
DSCM [36] and propose a new camera model, named Triple
Sphere Camera Model (TSCM), which can better approx-
imate the imaging process and achieve the best accuracy.
Then we propose an epipolar rectification algorithm de-
signed for multi-fisheye camera system. We leverage a
cubic-like projection approach to transform the four fish-
eye camera systems into four binocular camera systems and
then conduct epipolar rectification on each binocular sys-
tem. This method solves the distortion issue and reduces
the complex multi-fisheye omnidirectional depth estimation
problem to a much simpler binocular depth estimation prob-
lem. Finally, we design a lightweight binocular depth esti-
mation network based on RAFT [35]. We add Transformer
encoder [37] into feature extraction in RAFT to combine
the advantages of Transformer and GRU, making it easy to
balance accuracy and efficiency.
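To illustrate the kind of re-projection involved in turning fisheye views into rectified pinhole pairs, the sketch below resamples a fisheye image into a virtual pinhole view. For simplicity it assumes an equidistant fisheye model rather than the proposed TSCM, assumes the two cameras share orientation (no rotation applied), and all intrinsics are made up.

```python
# Sketch (NumPy + OpenCV): resample an equidistant fisheye image into a
# virtual pinhole view by casting a ray per target pixel and projecting it
# back into the fisheye image. TSCM and real calibration are not modeled.
import numpy as np
import cv2

H, W = 480, 640                        # virtual pinhole resolution (assumed)
f_pin = 300.0                          # virtual pinhole focal length (assumed)
f_fish, cx, cy = 320.0, 640.0, 640.0   # fisheye intrinsics (assumed)

fisheye = np.zeros((1280, 1280, 3), np.uint8)  # placeholder fisheye image

# Ray direction for every pixel of the virtual pinhole camera.
u, v = np.meshgrid(np.arange(W), np.arange(H))
x = (u - W / 2) / f_pin
y = (v - H / 2) / f_pin
z = np.ones_like(x)
theta = np.arctan2(np.sqrt(x**2 + y**2), z)    # angle from the optical axis
phi = np.arctan2(y, x)

# Equidistant model: radial image distance is proportional to theta.
r = f_fish * theta
map_x = (cx + r * np.cos(phi)).astype(np.float32)
map_y = (cy + r * np.sin(phi)).astype(np.float32)

pinhole_view = cv2.remap(fisheye, map_x, map_y, cv2.INTER_LINEAR)
print(pinhole_view.shape)
```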
We compare OmniVidar with existing methods on sev-
eral datasets. The results demonstrate that our method out-
performs all the others in terms of speed and accuracy,
achieving state-of-the-art performance.
|
Wu_NeFII_Inverse_Rendering_for_Reflectance_Decomposition_With_Near-Field_Indirect_Illumination_CVPR_2023 | Abstract
Inverse rendering methods aim to estimate geometry,
materials and illumination from multi-view RGB images. In
order to achieve better decomposition, recent approaches
attempt to model indirect illuminations reflected from dif-
ferent materials via Spherical Gaussians (SG), which, how-
ever, tends to blur the high-frequency reflection details. In
this paper, we propose an end-to-end inverse rendering
pipeline that decomposes materials and illumination from
multi-view images, while considering near-field indirect il-
lumination. In a nutshell, we introduce the Monte Carlo
sampling based path tracing and cache the indirect illumi-
nation as neural radiance, enabling a physics-faithful and
easy-to-optimize inverse rendering method. To enhance ef-
ficiency and practicality, we leverage SG to represent the
smooth environment illuminations and apply importance
sampling techniques. To supervise indirect illuminations
from unobserved directions, we develop a novel radiance
consistency constraint between implicit neural radiance
and path tracing results of unobserved rays along with the
joint optimization of materials and illuminations, thus sig-
nificantly improving the decomposition performance. Ex-
tensive experiments demonstrate that our method outper-
forms the state-of-the-art on multiple synthetic and real
datasets, especially in terms of inter-reflection decomposi-
tion.
| 1. Introduction
Inverse rendering, i.e., recovering geometry, material
and lighting from images, is a long-standing problem in
computer vision and graphics. It is important for digitiz-
ing our real world and acquiring high quality 3D contents
in many applications such as VR, AR and computer games.
Recent methods [7, 41, 44, 45] represent geometry and
materials as neural implicit fields, and recover them in an
*Corresponding author.
Figure 1. Our method integrates lights through path tracing with
Monte Carlo sampling, while Invrender [45] uses Spherical Gaus-
sians to approximate the overall illumination. In this way, our
method simultaneously optimizes indirect illuminations and ma-
terials, and achieves better decomposition of inter-reflections.
analysis-by-synthesis manner. However, how to decompose
the indirect illumination from materials is still challenging.
Most methods [6, 7, 25, 41, 44] model the environment il-
luminations but ignore indirect illuminations. As a result,
the inter-reflections and shadows between objects are mis-
takenly treated as materials. Invrender [45] takes the in-
direct illumination into consideration, and approximates it
with Spherical Gaussian (SG) for computation efficiency.
Since SG approximation cannot model the high frequency
details, the recovered inter-reflections tend to be blurry and
contain artifacts. Besides, indirect illuminations estimated
by an SG network cannot be jointly optimized with materi-
als and environment illuminations.
In this paper, we propose an end-to-end inverse render-
ing pipeline that decomposes materials and illumination,
while considering near-field indirect illumination. In con-
trast to the method [45], we represent the materials and the
indirect illuminations as neural implicit fields, and jointly
optimize them with the environment illuminations. Fur-
thermore, we introduce a Monte Carlo sampling based path
tracing to model the inter-reflections while leveraging SG to
represent the smooth environment illuminations. In the for-
ward rendering, incoming rays are sampled and integrated
by a Monte Carlo estimator instead of being approximated by
a pretrained SG approximator, as shown in Fig. 1. To depict
the radiance, the bounced secondary rays are further traced
once and computed based on the cached neural indirect illu-
mination. During the joint optimization, the gradients could
be directly propagated to revise the indirect illuminations.
In this way, high frequency details of the inter-reflection can
be preserved.
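The Monte Carlo estimator at the heart of this forward rendering can be written compactly; the sketch below estimates outgoing radiance at a surface point with cosine-weighted sampling, a Lambertian BRDF, and a stand-in function for the environment plus cached indirect radiance. It is a didactic simplification, not the paper's renderer.

```python
# Sketch (NumPy): Monte Carlo estimate of the rendering equation at a point,
#   L_o = (1/N) * sum_i  f_r * L_i(w_i) * cos(theta_i) / pdf(w_i),
# using cosine-weighted hemisphere sampling in the local (normal-aligned) frame.
import numpy as np

def incoming_radiance(directions):
    """Stand-in for environment + cached neural indirect radiance."""
    return 0.5 + 0.5 * directions[:, 2:3]   # brighter toward the normal (toy)

def estimate_outgoing_radiance(albedo, n_samples=1024, rng=np.random.default_rng(0)):
    u1, u2 = rng.random(n_samples), rng.random(n_samples)
    # Cosine-weighted hemisphere sampling: pdf(w) = cos(theta) / pi.
    r, phi = np.sqrt(u1), 2 * np.pi * u2
    w = np.stack([r * np.cos(phi), r * np.sin(phi), np.sqrt(1 - u1)], axis=1)
    cos_theta = w[:, 2:3]
    f_r = albedo / np.pi                     # Lambertian BRDF
    pdf = cos_theta / np.pi
    contrib = f_r * incoming_radiance(w) * cos_theta / pdf
    return contrib.mean(axis=0)

print(estimate_outgoing_radiance(albedo=np.array([0.8, 0.3, 0.2])))
```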
Specifically, to make our proposed framework work, we
need to address two critical techniques:
(i) The Monte Carlo estimator is computationally expen-
sive due to the significant number of rays required for sam-
pling. To overcome this, we use importance sampling to
improve integral estimation efficiency. We also find that SG
is a better representation of environment illuminations and
adapt the corresponding importance sampling techniques to
enhance efficiency and practicality.
(ii) Neural implicit fields often suffer from generalization
problems when the view directions deviate from the training
views, which is the common case of indirect illumination.
This would lead to erroneous decomposition between mate-
rials and illuminations. It is hard to determine whether radi-
ance comes from material albedos or indirect illuminations
as the indirect illuminations from unobserved directions are
unconstrained or could have any radiance. To learn indi-
rect illuminations from unobserved directions, we introduce
a radiance consistency constraint that enforces agreement between the implicit
neural radiance produced by the neural implicit fields and
the path tracing results along unobserved directions. In this fash-
ion, the ambiguity between materials and indirect illumina-
tions has been significantly mitigated. Moreover, they can
be jointly optimized with environment illuminations, lead-
ing to better decomposition performance.
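One way to picture the radiance consistency constraint is as a simple agreement loss between the two radiance estimates along sampled unobserved rays, as sketched below. Both radiance functions here are placeholders, and treating the path-traced estimate as a detached target is one possible design choice rather than the paper's exact formulation.

```python
# Sketch (PyTorch): penalize disagreement between the implicit radiance cache
# and path-traced radiance along randomly sampled (unobserved) directions.
import torch

def implicit_radiance(points, dirs, params):
    # Placeholder for the neural radiance cache: a tiny linear model here.
    return torch.sigmoid(torch.cat([points, dirs], -1) @ params)

def path_traced_radiance(points, dirs):
    # Placeholder for the one-bounce Monte Carlo path-tracing result.
    return torch.rand(points.shape[0], 3)

params = torch.randn(6, 3, requires_grad=True)
pts  = torch.rand(128, 3)                       # surface points
dirs = torch.nn.functional.normalize(torch.randn(128, 3), dim=-1)

loss = torch.mean((implicit_radiance(pts, dirs, params)
                   - path_traced_radiance(pts, dirs).detach()) ** 2)
loss.backward()
print(float(loss))
```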
We evaluate our method on synthetic and real data. Ex-
periments show that our approach achieves better perfor-
mance than others. Our method can render sharp inter-
reflection and recover accurate roughness as well as diffuse
albedo. Our contributions are summarized as follows:
• We propose an end-to-end inverse rendering pipeline
that decomposes materials and illumination, while
considering near-field indirect illumination.
• We introduce the Monte Carlo sampling based path
tracing and cache the indirect illumination as neural
radiance, resulting in a physics-faithful and easy-to-optimize inverse rendering process.
• We employ SG to parameterize smooth environment il-
lumination and apply importance sampling techniques
to enhance efficiency and practicality of the pipeline.
• We introduce a new radiance consistency in learning
indirect illuminations, which can significantly alleviate
the decomposition ambiguity between materials and
indirect illuminations.
|
Wang_MHPL_Minimum_Happy_Points_Learning_for_Active_Source_Free_Domain_CVPR_2023 | Abstract
Source free domain adaptation (SFDA) aims to transfer
a trained source model to the unlabeled target domain with-
out accessing the source data. However, the SFDA setting
faces a performance bottleneck due to the absence of source
data and target supervised information, as evidenced by
the limited performance gains of the newest SFDA meth-
ods. Active source free domain adaptation (ASFDA) can
break through the problem by exploring and exploiting a
small set of informative samples via active learning. In
this paper, we first find that those satisfying the proper-
ties of neighbor-chaotic, individual-different, and source-
dissimilar are the best points to select. We define them as the
minimum happy (MH) points, which are challenging to explore with ex-
isting methods. We propose minimum happy points learning
(MHPL) to explore and exploit MH points actively. We de-
sign three unique strategies: neighbor environment uncer-
tainty, neighbor diversity relaxation, and one-shot query-
ing, to explore the MH points. Further, to fully exploit MH
points in the learning process, we design a neighbor focal
loss that assigns the weighted neighbor purity to the cross
entropy loss of MH points to make the model focus more on
them. Extensive experiments verify that MHPL remarkably
exceeds the various types of baselines and achieves signifi-
cant performance gains at a small cost of labeling.
| 1. Introduction
Transferring a trained source model instead of the source
data to the unlabeled target domain, source-free domain
adaptation (SFDA) has drawn much attention recently.
Since it prevents the external leakage of source data, SFDA
meets privacy persevering [19, 47], data security [43], and
data silos [39]. Moreover, it has important potential in
many applications, e.g., object detection [23], object recog-
nition [25], and semantic segmentation [17]. However, the
SFDA setting faces a performance bottleneck due to the ab-
*denotes corresponding author.
sence of source data and target supervised information. The
state-of-the-art A2Net [50] is a very powerful method that
seeks a classifier and exploits classifier design to achieve ad-
versarial domain-level alignment and contrastive category-
level matching, but it only improved the mean accuracy of
the pioneering work (SHOT [25]) from 71.8% to 72.8% on
the challenging Office-Home dataset [45]. Although some
recent studies [20, 52] utilize the transformer or mix-up to
improve the performance further, they have modified the
structure of the source model or changed the source data,
which is not universal in privacy-preserving scenarios.
Active source free domain adaptation (ASFDA) can pro-
duce remarkable performance gains and breakthrough per-
formance bottlenecks when a small set of informative tar-
get samples labeled by experts. Two factors must be con-
sidered to achieve significant performance gains: (1) Ex-
ploring samples that, once labeled, will improve accuracy
significantly; (2) Exploiting limited active labeled target
data well in adaptation. However, these two factors have
not been achieved. For example, ELPT [24] uses predic-
tion uncertainty [27] to explore active samples and applies
cross-entropy loss to exploit these selected samples. While
the prediction uncertainty is error-prone due to the miscali-
brated source model under distribution shift [32], and the
pseudo-label noise of unlabeled samples easily influence
the effect of standard cross-entropy loss on active samples.
In this paper, we first find the best informative sam-
ples for ASFDA are Minimum Happy (MH) points that sat-
isfy the properties of neighbor-chaotic, individual-different,
and source-dissimilar. (1) The property of neighbor-chaotic
refers to the sample’s neighbor labels being very inconsis-
tent, which measures the sample uncertainty through its en-
vironment. The Active Learning (AL) and Active DA meth-
ods, which rely on the miscalibrated model output [32] or
domain discrepancy, can’t identify these uncertain samples
in ASFDA. As shown in Fig. 1, the samples selected by our
method are more likely to fall into red blocks with label-
chaotic environments than BVSB [14]. (2) The property
of individual-different guarantees the diversity of selected
uncertain samples to improve the effectiveness of querying.
Figure 1. Feature visualization for the source model with 5% actively labeled
target data on the Cl→Pr task; panels: (a) initial pseudo-labels, (b) BVSB [14],
(c) CoreSet [36], (d) MHPL. Different colors in (a) represent different classes
of pseudo-labels by clustering. Blue blocks include easily-adaptive source-similar
samples with label-clean neighbors that can be learned well by SFDA methods. Red
blocks include the hard-adaptive source-dissimilar samples with label-chaotic
neighbors. In (b), (c), and (d), the dark green indicates that the pseudo-label is
consistent with the true label, and light blue indicates the opposite. The red stars
indicate the selected samples based on BVSB, CoreSet, and our MHPL.
Previous methods [32,36,38] ensure sample diversity across
the global target domain. However, they would select al-
ready well-aligned source-similar samples [32] that are less
informative for target adaptation as they can be learned by
SFDA methods [16, 44]. Fig. 1 illustrates that compared
with CoreSet [36], most samples selected by our method
are diverse in the source-dissimilar regions (red blocks). (3)
The informative samples should be source-dissimilar, as the
source-dissimilar samples are more representative of the tar-
get domain and need to be explored. Most Active DA meth-
ods [6, 40] ensure source-dissimilar samples based on mea-
suring the distribution discrepancy across domains, which
is unavailable in ASFDA due to unseen source data.
Concerning the exploitation of selected samples, most
methods of AL [10, 34, 36], Active DA [6, 32, 40, 51], and
ELPT [24] treat them as ordinary labeled samples and use
standard supervised losses to learn them. However, the
number of selected samples is so tiny in ASFDA that they
occupy a small region of the target domain. With standard
supervised losses, the model cannot be well generalized to
the entire target domain, leading to poor generalization.
We propose the Minimum Happy Points Learning
(MHPL) to explore and exploit the informative MH points.
First, to measure the sample uncertainty, we propose a novel
uncertainty metric, neighbor environment uncertainty, that
is based on the purity and affinity characteristics of neigh-
bor samples. Then, to guarantee the individual difference,
we propose neighbor diversity relaxation based on perform-
ing relaxed selection among neighbors. Furthermore, the
source-dissimilar characteristic of samples is maintained by
our proposed one-shot querying. We select target samples
at once based on the source model, as the source model
without fine-tuning can better describe the distribution dis-
crepancy across domains and the source-dissimilar samples
are more likely to be explored. In addition, the selected
samples are fully exploited by a newly designed neighbor fo-
cal loss, which assigns the weighted neighbor purity to the
cross-entropy loss of MH points to make the model focus
Figure 2. The comparison of ASFDA baselines (SHOT [25] + *, *
denotes the active strategy), ELPT, and our MHPL with 5% active
labeled target samples on Ar→Cl in the Office-Home. The plot reports
classification accuracy (%); the compared active strategies include
Random, CTC, CoreSet, BADGE, Entropy, BVSB, and LC.
and learn more about them. As shown in Fig. 2, our MHPL
significantly outperforms the ASFDA baselines, i.e.,an ef-
fective SFDA method (SHOT) + AL strategies, and the ex-
isting state-of-the-art ASFDA approach, ELPT [24].
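To make the neighbor focal loss idea concrete, the sketch below weights the cross-entropy of each selected MH point by a term derived from its neighbor purity, so that points with more chaotic neighborhoods receive larger weights. The exact weighting function and the way purity is computed here are assumptions, not the paper's definition.

```python
# Sketch (PyTorch): a neighbor-focal-style loss that up-weights the
# cross-entropy of actively labeled MH points whose neighborhoods are impure.
import torch
import torch.nn.functional as F

def neighbor_purity(neighbor_labels):
    """Fraction of each sample's k neighbors sharing the majority neighbor label.
    Neighbor pseudo-labels are given directly to keep the sketch short."""
    modes = neighbor_labels.mode(dim=1).values
    return (neighbor_labels == modes.unsqueeze(1)).float().mean(dim=1)

logits = torch.randn(16, 10, requires_grad=True)   # model outputs for MH points
labels = torch.randint(0, 10, (16,))               # oracle (expert) labels
neighbor_labels = torch.randint(0, 10, (16, 5))    # pseudo-labels of 5 neighbors

purity = neighbor_purity(neighbor_labels)          # in [0, 1]
weights = 1.0 - purity                             # chaotic neighbors -> larger weight (assumed form)

ce = F.cross_entropy(logits, labels, reduction='none')
loss = (weights * ce).mean()
loss.backward()
print(float(loss))
```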
Our contributions can be summarized as follows: (1) We
discover and define the most informative active samples,
Minimum Happy (MH) points for ASFDA; (2) We propose
a novel MHPL framework to explore and exploit the MH
points with the neighbor environment uncertainty, neighbor
diversity relaxation, one-shot querying, and neighbor focal
loss; (3) Extensive experiments verify that MHPL surpasses
state-of-the-art methods.
|
Xu_A_Unified_Spatial-Angular_Structured_Light_for_Single-View_Acquisition_of_Shape_CVPR_2023 | Abstract
We propose a unified structured light, consisting of an
LED array and an LCD mask, for high-quality acquisition
of both shape and reflectance from a single view. For ge-
ometry, one LED projects a set of learned mask patterns to
accurately encode spatial information; the decoded results
from multiple LEDs are then aggregated to produce a final
depth map. For appearance, learned light patterns are cast
through a transparent mask to efficiently probe angularly-
varying reflectance. Per-point BRDF parameters are differ-
entiably optimized with respect to corresponding measure-
ments, and stored in texture maps as the final reflectance.
We establish a differentiable pipeline for the joint capture to
automatically optimize both the mask and light patterns to-
wards optimal acquisition quality. The effectiveness of our
light is demonstrated with a wide variety of physical ob-
jects. Our results compare favorably with state-of-the-art
techniques.
| 1. Introduction
Joint acquisition of both shape and appearance of a static
object is one key problem in computer vision and computer
graphics. It is critical for various applications, such as cul-
tural heritage, e-commerce and visual effects. Represented
as a 3D mesh and a 6D Spatially-Varying Bidirectional Re-
flectance Distribution Function (SVBRDF), a digitized ob-
ject can be rendered to reproduce the original look in the
virtual world with high fidelity for different view and light-
ing conditions.
Active lighting is widely employed in high-quality ac-
quisition. It probes the physical domain efficiently and
obtains measurements strongly correlated with the target,
leading to high signal-to-noise ratio (SNR) results. For ge-
ometry, structured illumination projects carefully designed
pattern(s) into the space to distinguish rays for accurate
3D triangulation [15,32]. For reflectance, illumination mul-
*Equal contributions.
†Corresponding authors ([email protected]/[email protected]).
Figure 1. Our hardware prototype. It consists of an LED array, an
LCD mask and a camera (left). One LED can project a set of mask
patterns for shape acquisition (center), and multiple LEDs can be
programmed to cast light patterns through a transparent mask for
reflectance capture (right).
tiplexing programs the intensities of different lights over
time, physically convolving with BRDF slices in the angu-
lar domain to produce clues for precise appearance deduc-
tion [13, 35].
While active lighting for geometry or reflectance alone
has been extensively studied, it is difficult to apply the idea
to joint capture. At one hand, directly combining the two
types of lights ends up with a bulky setup [39] and compet-
ing measurement coverages (i.e., a light for shape capture
cannot be co-located with one for reflectance). On the other
hand, existing work usually adopts one type of active light
only, and has to perform passive acquisition on the other,
leading to sub-optimal reconstructions. For example, Hol-
royd et al. [16] use projectors to capture geometry and im-
pose strong priors on appearance. Kang et al. [19] build a
light cube to densely sample reflectance in the angular do-
main. But the quality of its passive shape reconstruction is
severely limited, if the object surface lacks prominent spa-
tial features.
To tackle the above challenges, we propose a unified
structured light for high-quality acquisition of 3D shape
and reflectance. Our lightweight prototype consists of an
LED array and an LCD mask, which essentially acts as a
restricted lightfield projector to actively probe the spatial
and angular domain. For geometry, each LED projects a
set of mask patterns into the space to encode shape infor-
mation. For appearance, the same LED array produces dif-
ferent light patterns, which are cast through a transparent
mask to sample the reflectance that varies with the light an-
gle. The prototype helps capture sufficient physical infor-
mation to faithfully recover per-pixel depth and reflectance
even from a single view.
To program the novel light towards optimal acquisition
quality, we establish a differentiable pipeline for the joint
capture, so that both mask and light patterns can be auto-
matically optimized to harness our hardware capability. For
geometry, a set of mask patterns are independently learned
for each LED, by minimizing the depth uncertainty along
an arbitrary camera ray. We also exploit the physical con-
volution that causes blurry mask projections in our setup, to
encode richer spatial information for depth disambiguation.
Multiple LEDs can be employed to project different sets of
mask patterns, for improving the completeness and accu-
racy of the final shape. For reflectance, the light patterns are
optimized as part of an autoencoder, which learns to cap-
ture the essence of appearance [19]. The reflectance is then
optimized with respect to the measurements under such pat-
terns, taking into account the reconstructed geometry for a
higher-quality estimation.
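The per-point reflectance fit can be pictured as a small differentiable optimization, sketched below: BRDF parameters are updated so that a simple analytic render under the light patterns matches the captured measurements. The Lambertian-plus-ambient shading used here is only a stand-in for the real reflectance model and optics.

```python
# Sketch (PyTorch): differentiable per-point reflectance fitting. BRDF
# parameters are optimized so simulated measurements under the light
# patterns match the captured ones. The shading model is a toy stand-in.
import torch

num_patterns, num_leds = 32, 64
light_patterns = torch.rand(num_patterns, num_leds)       # learned LED intensities
led_dirs = torch.nn.functional.normalize(torch.randn(num_leds, 3), dim=-1)
normal = torch.tensor([0.0, 0.0, 1.0])

def render(albedo, ambient):
    # Per-LED Lambertian response, summed according to each light pattern.
    cos_term = (led_dirs @ normal).clamp(min=0.0)          # (num_leds,)
    per_led = albedo * cos_term + ambient                  # (num_leds,)
    return light_patterns @ per_led                        # (num_patterns,)

captured = render(torch.tensor(0.6), torch.tensor(0.05)).detach()  # synthetic "measurements"

albedo  = torch.tensor(0.2, requires_grad=True)
ambient = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([albedo, ambient], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = torch.mean((render(albedo, ambient) - captured) ** 2)
    loss.backward()
    opt.step()
print(float(albedo), float(ambient))
```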
The effectiveness of our approach is demonstrated on a
number of physical samples with considerable variations in
shape and reflectance. Using 4×18 = 72 mask patterns and
32 light patterns, we achieve on average a geometric accu-
racy of 0.27 mm (mean distance) and a reflectance accuracy
of 0.94 (SSIM), on a lightweight prototype with an effective
spatial resolution of only 320×320 and an angular resolu-
tion of 64×48. Our results are compared with state-of-the-
art techniques on shape and reflectance capture, as well as
validated against photographs. In addition, we evaluate the
impact of different factors over the final results and discuss
exciting future research directions.
|
Wang_MetaViewer_Towards_a_Unified_Multi-View_Representation_CVPR_2023 | Abstract
Existing multi-view representation learning methods typi-
cally follow a specific-to-uniform pipeline, extracting latent
features from each view and then fusing or aligning them
to obtain the unified object representation. However, the
manually pre-specified fusion functions and aligning criteria
could potentially degrade the quality of the derived represen-
tation. To overcome these issues, we propose a novel uniform-to-
specific multi-view learning framework from a meta-learning
perspective, where the unified representation no longer in-
volves manual manipulation but is automatically derived
from a meta-learner named MetaViewer. Specifically, we
formulated the extraction and fusion of view-specific latent
features as a nested optimization problem and solved it by us-
ing a bi-level optimization scheme. In this way, MetaViewer
automatically fuses view-specific features into a unified one
and learns the optimal fusion scheme by observing recon-
struction processes from the unified to the specific over all
views. Extensive experimental results in downstream classi-
fication and clustering tasks demonstrate the efficiency and
effectiveness of the proposed method.
| 1. Introduction
Multi-view representation learning aims to learn a uni-
fied representation of the entity from its multiple observable
views for the benefit of downstream tasks [35, 43, 58]. Each
view acquired by different sensors or sources contains both
view-shared consistency information and view-specific in-
formation [8]. The view-specific part further consists of
complementary and redundant components, where the for-
mer can be considered as a supplement to the consistency
information, while the latter is view-private and may be
∗Corresponding authors
Concatenation Alignment Concatenation Figure 1. (a), (b) and (c) show three multi-view learning frame-
works following the specific-to-uniform pipeline, where the unified
representation is obtained by fusing or concatenating view-specific
features. (d) illustrates our uniform-to-specific manner, where a
meta-learner learns to fusion by observing reconstruction from uni-
fied representation to specific views.
adverse for the unified representation [16]. Therefore, a
high-quality representation is required to retain the consis-
tency and complementary information, as well as filter out
the view-private redundant ones [51].
Given the data containing two views, x1 and x2, prevail-
ing multi-view learning methods typically follow a specific-
to-uniform pipeline and can be roughly characterized as:
H := f(x1; Wf) ◦ g(x2; Wg),    (1)
where f and g are encoding (or embedding [27]) functions
that map the original view data into the corresponding latent
features with the trainable parameters Wf and Wg. These
latent features are subsequently aggregated into the unified
representation H using the designed aggregation operator
◦. With different aggregation strategies, existing approaches
can be further subdivided into the joint, alignment, and a
combined share-specific (S&S) representation [22, 26].
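For reference, the specific-to-uniform pipeline of Eq. (1) can be sketched as below, with two view encoders and a concatenation-style aggregation; the encoder shapes are arbitrary placeholders, not the compared methods' architectures.

```python
# Sketch (PyTorch) of Eq. (1): view-specific encoders f and g followed by a
# hand-crafted aggregation (here, concatenation) into the unified H.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(784, 128), nn.ReLU())   # encoder for view 1 (assumed dims)
g = nn.Sequential(nn.Linear(512, 128), nn.ReLU())   # encoder for view 2 (assumed dims)

x1, x2 = torch.randn(4, 784), torch.randn(4, 512)
H = torch.cat([f(x1), g(x2)], dim=-1)                # the "◦" chosen by hand
print(H.shape)
```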
Fig. 1 (a) - (c) show the above three branches of the
specific-to-uniform framework. Joint representation focuses
on the integration of complementary information by directly
fusing latent features, where ◦ is realized by fusion
strategies, such as graph-based modules [33], neural net-
works [10], or other elaborate functions [7, 34]. While align-
ment representation seeks alignment between view-specific
features to retain consistency information and specifies ◦ as
the alignment operator, measured by distance [11], correla-
tion [24, 47], or similarity [25, 38, 39]. The aligned features
can be concatenated as the unified representation for down-
stream tasks. As a trade-off strategy, S&S representation ex-
plicitly distinguishes latent features into shared and specific
representations and only aligns the shared part [20, 30, 53].
Despite demonstrating promising results, the specific-to-
uniform framework inherently suffers from potential risks
in the following two aspects: (1) Compared with the data-
oriented fusion rules, manually pre-specified rules are de-
signed to compute the unified representation by concatenat-
ing or fusing latent features, restricting their extensibility in
complex real-world applications. (2) Even if we can find
a well-performing fusion scheme, the sequential training
manner limits the model's ability to constrain the various component
information separately. For example, it is difficult to automat-
ically separate out view-private redundant information from
the feature level [22, 37]. Recent studies have attempted
to address the second issue by decomposing latent features
using matrix factorization [61] or hierarchical feature model-
ing [51], but still cannot avoid manually pre-specified fusion
or even decomposition strategies.
In this work, we provide a new meta-learning perspec-
tive for multi-view representation learning and propose a
novel uniform-to-specific framework to address these poten-
tial risks. Fig. 1 (d) shows the overall schematic. In contrast
to the specific-to-uniform pipeline, the unified representation
no longer involves manual manipulation but is automatically
derived from a meta-learner named MetaViewer. To train the
MetaViewer, we first decouple the learning of view-specific
latent features and unified meta representation. These two
learning processes can then be formulated as a nested opti-
mization problem and eventually solved by a bi-level opti-
mization scheme. In detail, MetaViewer fuses view-specific
features into a unified one at the outer level and learns the op-
timal fusion scheme by observing reconstruction processes
from the unified to the specific over all views at the inner
level. In addition, our uniform-to-specific framework is com-
patible with most existing objective functions and pretext tasks to cope with complex real-world scenarios. Extensive
experiments validate that MetaViewer achieves comparable
performance to the state-of-the-art methods in downstream
clustering and classification tasks. The core contributions of
this work are as follows.
1. We provide a new meta-learning perspective and de-
velop a novel uniform-to-specific framework to learn
a unified multi-view representation. To the best of our
knowledge, it could be the first meta-learning-based
work in multi-view representation learning community.
2. We propose MetaViewer, a meta-learner that formu-
lates the modeling of view-specific features and unified
representation as a nested bi-level optimization and
ultimately meta-learns a data-driven optimal fusion.
3. Extensive experiments on multiple benchmarks validate
that our MetaViewer achieves comparable performance
to the existing methods in two downstream tasks.
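As a rough illustration of the nested scheme described above, the sketch below runs a first-order bi-level loop: inner steps update view-specific reconstruction heads from the fused representation, and an outer step updates the fusion (meta) module on the post-inner-step reconstruction loss. This is a first-order simplification under assumed architectures, not the MetaViewer training code.

```python
# Sketch (PyTorch): first-order bi-level loop. Inner level: fit per-view
# decoders that reconstruct each view from the fused representation.
# Outer level: update the fusion (meta) module on the resulting loss.
import torch
import torch.nn as nn

enc1, enc2 = nn.Linear(784, 64), nn.Linear(512, 64)      # view encoders (assumed)
fuse = nn.Linear(128, 64)                                 # meta-learned fusion
dec1, dec2 = nn.Linear(64, 784), nn.Linear(64, 512)       # view-specific decoders

outer_opt = torch.optim.Adam(list(fuse.parameters()) +
                             list(enc1.parameters()) + list(enc2.parameters()), lr=1e-3)
inner_opt = torch.optim.SGD(list(dec1.parameters()) + list(dec2.parameters()), lr=1e-2)

x1, x2 = torch.randn(8, 784), torch.randn(8, 512)
for step in range(50):
    h = fuse(torch.cat([enc1(x1), enc2(x2)], dim=-1))     # unified representation

    # Inner level: a few reconstruction steps for the view-specific decoders.
    for _ in range(3):
        inner_loss = ((dec1(h.detach()) - x1) ** 2).mean() + \
                     ((dec2(h.detach()) - x2) ** 2).mean()
        inner_opt.zero_grad(); inner_loss.backward(); inner_opt.step()

    # Outer level: update fusion/encoders on the post-inner reconstruction loss.
    outer_loss = ((dec1(h) - x1) ** 2).mean() + ((dec2(h) - x2) ** 2).mean()
    outer_opt.zero_grad(); outer_loss.backward(); outer_opt.step()
print(float(outer_loss))
```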
|
Wang_JAWS_Just_a_Wild_Shot_for_Cinematic_Transfer_in_Neural_CVPR_2023 | Abstract
This paper presents JAWS, an optimization-driven ap-
proach that achieves the robust transfer of visual cinematic
features from a reference in-the-wild video clip to a newly
generated clip. To this end, we rely on an implicit-neural-
representation (INR) in a way to compute a clip that shares
the same cinematic features as the reference clip. We pro-
pose a general formulation of a camera optimization prob-
lem in an INR that computes extrinsic and intrinsic camera
parameters as well as timing. By leveraging the differen-
tiability of neural representations, we can back-propagate
our designed cinematic losses measured on proxy estima-
tors through a NeRF network to the proposed cinematic
parameters directly. We also introduce specific enhance-
ments such as guidance maps to improve the overall qual-
ity and efficiency. Results display the capacity of our sys-
tem to replicate well known camera sequences from movies,
adapting the framing, camera parameters and timing of the
generated video clip to maximize the similarity with the ref-
erence clip.
| 1. Introduction
Almost all film directors and visual artists follow the
paradigm of watch-and-learn by drawing inspiration from
others visual works. Imitating masterpieces (also known
as visual homage) in their visual composition, camera mo-
tion or character action is a popular way to pay tribute to
and honor the source work which influenced them. This
subtly draws a faint, yet distinctive clue through the devel-
opment of the whole film history. Examples are common-
place: from the dizzying dolly-zoom effect in Vertigo to
the scary scene in Jaws (see Fig. 1); or from the old school
Bruce Lee’s kung-fu films to blockbusters such as Kill Bill
andMatrix series. The cinematic knowledge encompassed
in reference sequences is therefore carried out and inherited
through these visual homages, and have even been adapted
*Equal contribution. Corresponding to [email protected].
Figure 1. This illustration displays the capacity of our proposed
method to transfer the cinematic motions from the in-the-wild fa-
mous film clip Jaws (bottom) to another context, replicating and
adapting the dolly-zoom effect (a combined translation plus field
of view change in opposite directions, causing distinctive motions
on background and foreground contents).
to more modern visual media such as digital animation or
video games. Audiences obviously acknowledge the strong
references between the epic Western The good, the bad and
the ugly of 1966 and the 2010 Red Dead Redemption .
The creation of such visual homages in real or virtual en-
vironments yet remains a challenging endeavor that requires
more than just replicating a camera angle, motion or visual
compositions. Such a cinematic transfer task is successful
only if it can achieve a similar visual perception ( look-and-
feel) between the reference and homage clip, for example
in terms of visual composition ( i.e. how visual elements are
framed on screen), perceived motion ( i.e. how elements and
scene move through the whole sequence), but also, camera
focus, image depth, actions occurring or scene lighting.
Inspired by the aforementioned watch-and-learn
paradigm, we propose to address the cinematic transfer
problem by focusing on motion characteristics, i.e. solving
a cinematic motion transfer problem.
While different techniques are available to extract cam-
era poses and trajectories from reference videos ( e.g. sparse
or dense localization and mapping techniques) and poten-
tially transfer them, the naive replication of such trajectories
to new 3D environments (mesh-based or implicit NeRF-
based) generally fail to reproduce the perceived motion due
to scaling issues, different screen compositions, lack of vi-
sual anchors, or scene dissimilarities. Furthermore, these
extraction techniques are very sensitive to widespread cam-
era effects such as shallow depths of field or motion blur.
In this paper, we propose to address this cinematic mo-
tion transfer problem following a different path. Rather than
extracting visual features from the reference clip (as geo-
metric properties or encoded in a latent representation) and
designing a technique to then recompute a new video clip
using these visual features, we rely on the differentiable na-
ture of NeRF representations. We propose the design of
a fully differentiable pipeline which takes as input a refer-
ence clip, an existing NeRF representation of a scene, and
optimizes a sequence of camera parameters (pose and focal
length) in both space and time inside the NeRF so as to min-
imize differences between the motion features of the refer-
ences views and the clip created from the optimized param-
eters. By exploiting an end-to-end differentiable pipeline,
our process directly backpropagates the changes to spatial
and temporal cinematic parameters. The key to successful
cinematic motion transfer is then found in the design of rel-
evant motion features and means to improve guidance in
the optimization. Our work relies on the combination of
an optical flow estimator, to ensure the transfer of camera
directions of motions, and a character pose estimator to en-
sure the anchoring of motions around a target. A dedicated
guidance map is then created to draw the attention of the
framework on key aspects of the extracted features.
The contributions of our work are:
•The first feature-driven cinematic motion transfer tech-
nique that can reapply motion characteristics of an in-the-
wild reference clip to a NeRF representation.
•The design of an end-to-end differentiable pipeline to di-
rectly optimize spatial and temporal cinematic parameters
from a reference clip, by exploiting differentiability of neu-
ral rendering and proxy networks.
•The proposal of robust cinematic losses combined with
guidance maps that ensure the effective transfer of both on-
screen motions and character framing to keep the cinematic
visual similarity of the reference and generated clip.
|
Voynov_Multi-Sensor_Large-Scale_Dataset_for_Multi-View_3D_Reconstruction_CVPR_2023 | Abstract
We present a new multi-sensor dataset for multi-view 3D
surface reconstruction. It includes registered RGB and depth
data from sensors of different resolutions and modalities:
smartphones, Intel RealSense, Microsoft Kinect, industrial
cameras, and structured-light scanner. The scenes are se-
lected to emphasize a diverse set of material properties
challenging for existing algorithms. We provide around
1.4 million images of 107 different scenes acquired from 100
viewing directions under 14 lighting conditions. We expect
our dataset will be useful for evaluation and training of 3D
reconstruction algorithms and for related tasks. The dataset
is available at skoltech3d.appliedai.tech .
*Work partially performed while at Skolkovo Institute of Science and
Technology.
| 1. Introduction
Sensor data used in 3D reconstruction range from highly
specialized and expensive CT, laser, and structured-light
scanners to video from commodity cameras and depth sen-
sors; computational 3D reconstruction methods are typically
tailored to a specific type of sensors. Yet, even commod-
ity hardware increasingly provides multi-sensor data: for
example, many recent phones have multiple RGB cameras
as well as lower resolution depth sensors. Using data from
different sensors, RGB-D data in particular, has the potential
to considerably improve the quality of 3D reconstruction.
For example, multi-view stereo algorithms ( e.g., [55,82])
produce high-quality 3D geometry from RGB data, but may
miss featureless surfaces; supplementing RGB images with
depth sensor data makes it possible to have more complete
reconstructions. Conversely, commodity depth sensors often
lack resolution provided by RGB cameras.
Learning-based techniques substantially simplify the chal-
lenging task of combining data fom multiple sensors. How-
ever, learning methods require suitable data for training. Our
dataset aims to complement existing ones ( e.g., [13,28,31,
58,63,83]), as discussed in Section 2, most importantly, by
providing multi-sensor data and high-accuracy ground truth
for objects with challenging reflection properties.
The structure of our dataset is expected to benefit research
on 3D reconstruction in several ways.
•Multi-sensor data. We provide data from seven differ-
ent devices with high-quality alignment, including low-
resolution depth data from commodity sensors (phones,
Kinect, RealSense), high-resolution geometry data from a
structured-light scanner, and RGB data at different resolu-
tions and from different cameras. This enables supervised
learning for reconstruction methods relying on different
combinations of sensor data, in particular, increasingly
common combination of high-resolution RGB with low-
resolution depth data. In addition, multi-sensor data sim-
plifies comparison of methods relying on different sensor
types (RGB, depth, and RGB-D).
•Lighting and pose variability. We chose to focus on a
setup with controlled (but variable) lighting and a fixed set
of camera positions for all scenes, to enable high-quality
alignment of data from multiple sensors, and systematic
comparisons of algorithm sensitivity to various factors. We
aimed to make the dataset large enough (1.39 M images of
different modalities in total) to support training machine
learning algorithms, and provide systematic variability
in camera poses (100 per object), lighting (14 lighting
setups, including “hard” and diffuse lighting, flash, and
backlighting, as illustrated in Figure 1b), and reflection
properties these algorithms need.
•Object selection. Among 107 objects in our dataset, we
include primarily objects that may present challenges to
existing algorithms mentioned above (see examples in
Figure 1c); we made special effort to improve quality of
3D high-resolution structured-light data for these objects.
Our focus is on RGB and depth data for individual objects
in laboratory setting, similar to the DTU dataset [ 28], rather
than on complex scenes with natural lighting, as in Tanks
and Temples [ 31] or ETH3D [ 58]. This provides means for
systematic exploration and isolation of different factors con-
tributing to strengths and weaknesses of different algorithms,
and complements more holistic evaluation and training data
provided by datasets with complex scenes.
|
Wang_A_Practical_Stereo_Depth_System_for_Smart_Glasses_CVPR_2023 | Abstract
We present the design of a productionized end-to-end
stereo depth sensing system that does pre-processing, on-
line stereo rectification, and stereo depth estimation with a
fallback to monocular depth estimation when rectification is
unreliable. The output of our depth sensing system is then
used in a novel view generation pipeline to create 3D com-
putational photography effects using point-of-view images
captured by smart glasses. All these steps are executed on-
device on the stringent compute budget of a mobile phone,
and because we expect the users can use a wide range of
smartphones, our design needs to be general and cannot
*Email: [email protected]
†Work was done at Meta.
be dependent on a particular hardware or ML accelerator
such as a smartphone GPU. Although each of these steps
is well studied, a description of a practical system is still
lacking. For such a system, all these steps need to work in
tandem with one another and fall back gracefully on failures
within the system or less-than-ideal input data. We show
how we handle unforeseen changes to calibration, e.g., due
to heat, robustly support depth estimation in the wild, and
still abide by the memory and latency constraints required
for a smooth user experience. We show that our trained
models are fast, and run in less than 1s on a six-year-old
Samsung Galaxy S8 phone’s CPU. Our models generalize
well to unseen data and achieve good results on Middlebury
and in-the-wild images captured from the smart glasses.
| 1. Introduction
Stereo disparity estimation is one of the fundamental
problems in computer vision, and it has a wide variety of
applications in many different fields, such as AR/VR, com-
putational photography, robotics, and autonomous driving.
Researchers have made significant progress in using neural
networks to achieve high accuracy in benchmarks such as
KITTI [21], Middlebury [28] and ETH3D [30].
However, there are many practical challenges in using
stereo in an end-to-end depth sensing system. We present
a productionized system in this paper for smart glasses
equipped with two front-facing stereo cameras. The smart
glasses are paired with a mobile phone, so the main com-
putation happens on the phone. The end-to-end system
does pre-processing, online stereo rectification, and stereo
depth estimation. In the event that the rectification fails,
we fallback on monocular depth estimation. The output of
the depth sensing system is then fed to a rendering pipeline
to create three-dimensional computational photographic ef-
fects from a single user capture. To our knowledge there is
limited existing literature discussing how to design such an
end-to-end system running on a very limited computational
budget.
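The fallback logic can be summarized as in the sketch below: after online rectification, sparse matches are checked for residual vertical disparity, and the pipeline switches to monocular depth if the residual is too large. Function names, the keypoint matcher, and the threshold are illustrative choices, not the production implementation.

```python
# Sketch (OpenCV + NumPy): decide between stereo and monocular depth based
# on the residual vertical disparity of sparse matches after rectification.
import cv2
import numpy as np

def vertical_residual(left, right, max_matches=200):
    orb = cv2.ORB_create(1000)
    kl, dl = orb.detectAndCompute(left, None)
    kr, dr = orb.detectAndCompute(right, None)
    if dl is None or dr is None:
        return np.inf
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(dl, dr)
    matches = sorted(matches, key=lambda m: m.distance)[:max_matches]
    if not matches:
        return np.inf
    dy = [abs(kl[m.queryIdx].pt[1] - kr[m.trainIdx].pt[1]) for m in matches]
    return float(np.median(dy))

def depth_from_pair(left_rect, right_rect, run_stereo, run_mono, thresh_px=2.0):
    """run_stereo / run_mono are placeholders for the learned depth networks."""
    if vertical_residual(left_rect, right_rect) <= thresh_px:
        return run_stereo(left_rect, right_rect)   # rectification trusted
    return run_mono(left_rect)                     # graceful monocular fallback

# Toy usage with random images and dummy networks.
L = np.random.randint(0, 255, (480, 640), np.uint8)
R = np.random.randint(0, 255, (480, 640), np.uint8)
depth = depth_from_pair(L, R,
                        run_stereo=lambda l, r: np.zeros_like(l, np.float32),
                        run_mono=lambda l: np.zeros_like(l, np.float32))
print(depth.shape)
```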
The main goal of our system is to achieve the best
user experience to create the 3D computational photogra-
phy effects. We need to support any mainstream phone the
user chooses to use and capture any type of surroundings.
Therefore, the system needs to be robust and operate on a
very limited computational budget. Nevertheless, we show
that our trained model achieves on-par performance with
state-of-the-art (SotA) networks such as RAFT-stereo [18],
GA-Net [45], LEA-stereo [3] and MobileStereoNet on the
task of zero-shot depth estimation on the Middlebury 2014
dataset [28] despite our network being orders of magnitude
faster than these methods.
The main technical and system contributions are:
1. We describe an end-to-end stereo system with careful
design choices and fallback plans. Our design strate-
gies can be a baseline for other similar depth systems.
2. We introduce a novel online rectification algorithm
that is fast and robust.
3. We introduce a novel strategy to co-design a stereo net-
work and a monocular depth network to make both net-
works’ output format similar.
4. We show that our quantized network achieves compet-
itive accuracy on a tight compute budget.
|
Xing_SVFormer_Semi-Supervised_Video_Transformer_for_Action_Recognition_CVPR_2023 | Abstract
Semi-supervised action recognition is a challenging but
critical task due to the high cost of video annotations. Exist-
ing approaches mainly use convolutional neural networks,
yet current revolutionary vision transformer models have
been less explored. In this paper, we investigate the use
of transformer models under the SSL setting for action
recognition. To this end, we introduce SVFormer, which
adopts a steady pseudo-labeling framework ( i.e., EMA-
Teacher) to cope with unlabeled video samples. While a
wide range of data augmentations have been shown effec-
tive for semi-supervised image classification, they generally
produce limited results for video recognition. We there-
fore introduce a novel augmentation strategy, Tube Token-
Mix, tailored for video data where video clips are mixed
via a mask with consistent masked tokens over the temporal
axis. In addition, we propose a temporal warping augmen-
tation to cover the complex temporal variation in videos,
which stretches selected frames to various temporal dura-
tions in the clip. Extensive experiments on three datasets
Kinetics-400, UCF-101, and HMDB-51 verify the advan-
tage of SVFormer. In particular, SVFormer outperforms
the state-of-the-art by 31.5% with fewer training epochs
under the 1% labeling rate of Kinetics-400. Our method
can hopefully serve as a strong benchmark and encour-
age future search on semi-supervised action recognition
with Transformer networks. Code is released at https:
//github.com/ChenHsing/SVFormer .
| 1. Introduction
Videos have gradually replaced images and texts on In-
ternet and grown at an exponential rate. On video websites
such as YouTube, millions of new videos are uploaded ev-
ery day. Supervised video understanding works [4, 15, 17,
29, 34, 56, 70] have achieved great successes. They rely on
Figure 1. Comparison of our method with the supervised baseline
and previous state-of-the-art SSL method [64]. SVFormer signif-
icantly outperforms previous methods under the case with very
little labeled data.
large-scale manual annotations, yet labeling so many videos
is time-consuming and labor-intensive. How to make use of
unlabeled videos that are readily available for better video
understanding is of great importance [25, 38, 39].
In this spirit, semi-supervised action recognition [25,
40, 57] explores how to enhance the performance of deep
learning models using large-scale unlabeled data. This
is generally done with labeled data to pretrain the net-
works [57,64], and then leveraging the pretrained models to
generate pseudo labels for unlabeled data, a process known
as pseudo labeling. The obtained pseudo labels are further
used to refine the pretrained models. In order to improve
the quality of pseudo labeling, previous methods [57, 62]
use additional modalities such as optical flow [3] and tem-
poral gradient [50], or introduce auxiliary networks [64] to
supervise unlabeled data. Though these methods present
promising results, they typically require additional training
or inference cost, preventing them from scaling up.
Recently, video transformers [2,4,34] have shown strong
results compared to CNNs [15, 17, 22]. Though great suc-
cess has been achieved, the exploration of transformers on
semi-supervised video tasks remains unexplored. While it
sounds appealing to extend vision transformers directly to
SSL, a previous study shows that transformers perform sig-
nificantly worse compared to CNNs in the low-data regime
due to the lack of inductive bias [54]. As a result, directly
applying SSL methods, e.g., FixMatch [41], to ViT [13]
leads to an inferior performance [54].
Surprisingly, in the video domain, we observe that
TimeSformer, a popular video Transformer [4], initialized
with weights from ImageNet [11], demonstrates promising
results even when annotations are limited [37]. This en-
courages us to explore the great potential of transformers
for action recognition in the SSL setting.
Existing SSL methods generally use image augmenta-
tions ( e.g., Mixup [67] and CutMix [66]) to speed up con-
vergence under limited label resources. However, such
pixel-level mixing strategies are not perfectly suitable for
transformer architectures, which operate on tokens pro-
duced by patch splitting layers. In addition, strategies
like Mixup and CutMix are particularly designed for image
tasks, which fail to consider the temporal nature of video
data. Therefore, as will be shown empirically, directly us-
ing Mixup or CutMix for semi-supervised action recogni-
tion leads to unsatisfactory performance.
In this work, we propose SVFormer, a transformer-based
semi-supervised action recognition method. Concretely,
SVFormer adopts a consistency loss that builds two dif-
ferently augmented views and demands consistent predic-
tions between them. Most importantly, we propose Tube
TokenMix (TTMix), an augmentation method that is natu-
rally suitable for video Transformer. Unlike Mixup and Cut-
Mix, Tube TokenMix combines features at the token-level
after tokenization via a mask, where the mask has consistent
masked tokens over the temporal axis. Such a design could
better model the temporal correlations between tokens.
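A minimal PyTorch-style sketch of this tube-style token mixing is given below. It only illustrates the "mask shared across the temporal axis" idea; the actual masking ratio, mask sampling and label handling used by SVFormer are not specified here and may differ.

import torch

def tube_token_mix(tokens_a, tokens_b, mask_ratio=0.5):
    """tokens_*: (B, T, N, C) token features of two clips after tokenization.
    A spatial mask of shape (B, 1, N, 1) is sampled once and repeated over
    the temporal axis, so every frame shares the same masked token positions."""
    B, T, N, C = tokens_a.shape
    spatial_mask = (torch.rand(B, 1, N, 1, device=tokens_a.device) < mask_ratio).float()
    tube_mask = spatial_mask.expand(B, T, N, 1)           # constant along T
    mixed = tube_mask * tokens_a + (1.0 - tube_mask) * tokens_b
    lam = tube_mask.mean().item()                         # label mixing weight
    return mixed, lam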
Temporal augmentations in the literature (e.g., varying
frame rates) only consider simple temporal scaling or shift-
ing, neglecting the complex temporal changes of each part
in human action. To help the model learn strong temporal
dynamics, we further introduce the Temporal Warping Aug-
mentation (TWAug), which arbitrarily changes the temporal
length of each frame in the clip. TWAug can cover the com-
plex temporal variation in videos and is complementary to
spatial augmentations [10]. When combining TWAug with
TTMix, significant improvements are achieved.
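A rough sketch of a temporal-warping-style augmentation is shown below: frame indices are resampled non-uniformly so that randomly chosen frames are stretched over several time steps. This is only an illustration of the idea, not the exact TWAug procedure.

import torch

def temporal_warp(clip, num_out=None):
    """clip: (T, C, H, W) video tensor. Returns a clip of the same length in
    which each original frame is repeated a random number of times."""
    T = clip.shape[0]
    num_out = num_out or T
    repeats = torch.randint(1, 4, (T,))                   # random duration per frame
    idx = torch.repeat_interleave(torch.arange(T), repeats)
    # Uniformly subsample the warped index sequence back to num_out frames.
    pick = torch.linspace(0, idx.numel() - 1, num_out).long()
    return clip[idx[pick]]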
As shown in Fig. 1, SVFormer achieves promising re-
sults in several benchmarks. (i) We observe that the super-
vised Transformer baseline is much better than the Conv-
based method [22], and is even comparable with the 3D-
ResNet state-of-the-art method [64] on Kinetics400 when
trained with 1% of labels. (ii) SVFormer-S significantly
outperforms previous state-of-the-arts with similar param-
eters and inference cost, measured by FLOPs. (iii) Our
method is also effective for the larger SVFormer-B model.
Our contributions are as follows:
•We are the first to explore the transformer model for
semi-supervised video recognition. Unlike SSL for image recognition with transformers, we find that us-
ing parameters pretrained on ImageNet is of great im-
portance to ensure decent results for action recognition
in the low-data regime.
•We propose a token-level augmentation Tube Token-
Mix, which is more suitable for video Transformer
than pixel-level mixing strategies. Coupled with Tem-
poral Warping Augmentation, which improves tempo-
ral variations between frames, TTMix achieves signif-
icant boost compared with image augmentation.
•We conduct extensive experiments on three benchmark
datasets. The performances of our method in two
different sizes ( i.e., SVFormer-B and SVFormer-S)
outperform state-of-the-art approaches by clear mar-
gins. Our method sets a strong baseline for future
transformer-based works.
|
Xiong_Learning_Compact_Representations_for_LiDAR_Completion_and_Generation_CVPR_2023 | Abstract
LiDAR provides accurate geometric measurements of the
3D world. Unfortunately, dense LiDARs are very expensive
and the point clouds captured by low-beam LiDAR are often
sparse. To address these issues, we present UltraLiDAR, a
data-driven framework for scene-level LiDAR completion,
LiDAR generation, and LiDAR manipulation. The crux of
UltraLiDAR is a compact, discrete representation that en-
codes the point cloud’s geometric structure, is robust to
noise, and is easy to manipulate. We show that by aligning
the representation of a sparse point cloud to that of a dense
point cloud, we can densify the sparse point clouds as if they
were captured by a real high-density LiDAR, drastically re-
ducing the cost. Furthermore, by learning a prior over the
discrete codebook, we can generate diverse, realistic LiDAR point clouds for self-driving. We evaluate the effectiveness
of UltraLiDAR on sparse-to-dense LiDAR completion and
LiDAR generation. Experiments show that densifying real-
world point clouds with our approach can significantly im-
prove the performance of downstream perception systems.
Compared to prior art on LiDAR generation, our approach
generates much more realistic point clouds. According to
an A/B test, over 98.5% of the time human participants pre-
fer our results over those of previous methods. Please re-
fer to project page https://waabi.ai/research/
ultralidar/ for more information.
| 1. Introduction
Building a robust 3D perception system is key to bring-
ing self-driving vehicles into our daily lives. To effectively
perceive their surroundings, existing autonomous systems
primarily exploit LiDAR as the major sensing modality,
since it can capture well the 3D geometry of the world.
However, while LiDAR provides accurate geometric mea-
surements, it comes with two major limitations: (i) the cap-
tured point clouds are inherently sparse; and (ii) the data
collection process is difficult to scale up.
Sparsity: Most popular self-driving LiDARs are time-of-
flight and scan the environment by rotating emitter-detector
pairs ( i.e., beams) around the azimuth. At every time step,
each emitter emits a light pulse which travels until it hits a
target, gets reflected, and is received by the detector. Dis-
tance is measured by calculating the time of travel. Due
to the design, the captured point cloud density inherently
decreases as the distance to the sensor increases. For dis-
tant objects, it is often that only a few LiDAR points are
captured, which greatly increases the difficulty for 3D per-
ception. The sparsity problem becomes even more severe
under poor weather conditions [47], or when LiDAR sen-
sors have fewer beams [4]. One “simple” strategy is to in-
crease the number of LiDAR beams. However, 128-beam
LiDAR sensors are much more expensive than their 64/32-
beam counterparts, not to mention that 512-beam LiDAR
does not exist yet.
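For reference, the range measurement mentioned above follows the standard time-of-flight relation: if Δt is the measured round-trip travel time of the pulse and c the speed of light, the estimated distance is

d = c · Δt / 2,

where the factor of two accounts for the pulse travelling to the target and back.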
Scalability: Training and testing perception systems in
diverse situations are crucial for developing robust au-
tonomous systems. However, due to their intricate design,
LiDARs are much more expensive than cameras. A com-
mercial 64-beam LiDAR usually costs over 25K USD. The
price barrier makes LiDAR less accessible to the general
public and restricts data collection to a small fraction of ve-
hicles that populate our roads, significantly hindering scal-
ing up. One way to circumvent the issue is to leverage exist-
ing LiDAR simulation suites to generate more data. While
the simulated point clouds are realistic, these systems typ-
ically require one to manually create the scene or rely on
multiple scans of the real world in advance, making such a
solution less desirable.
With these challenges in mind, we propose UltraLiDAR,
a novel framework for LiDAR completion and generation.
The key idea is to learn a compact, discrete 3D represen-
tation (codebook) of LiDAR point clouds that encodes the
geometric structure of the scene and the physical rules of
our world ( e.g., occlusion). Then, by aligning the represen-
tation of a sparse point cloud to that of a dense point cloud,
we can densify the sparse point cloud as if it were captured
by a high-density LiDAR ( e.g., 512-beam LiDAR). Further-
more, we can learn a prior over the discrete codebook and
generate novel, realistic driving scenes by sampling from it;
we can also manipulate the discrete code of the scene and
produce counterfactual scenarios, both of which can drasti-
cally improve the diversity and amount of LiDAR data. Fig.
1 shows some example outputs of UltraLiDAR. We demonstrate the effectiveness of UltraLiDAR on two
tasks: sparse-to-dense LiDAR completion and LiDAR gen-
eration. For LiDAR completion, since there is no ground
truth for high-density LiDAR, we exploit the performance
of downstream perception tasks as a “proxy” measure-
ment. Specifically, we evaluate 3D detection models on
both sparse and densified point clouds and measure the per-
formance difference. Experiments on Pandaset [46] and
KITTI-360 [27] show that our completion model can gen-
eralize across datasets and the densified results can signifi-
cantly improve the performance of 3D detection models. As
for LiDAR generation, we compare our results with state-
of-the-art LiDAR generative models [3, 53]. Our generated
point clouds better match the statistics of those of ground
truth data. We also conducted a human study where par-
ticipants prefer our method over prior art over 98.5% (best
100%) of the time; comparing to ground truth, our results
were selected 32% of the time (best 50%).
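To make the "compact, discrete representation (codebook)" idea more concrete, the following is a minimal vector-quantization sketch in the spirit of VQ-VAE-style models; the actual UltraLiDAR architecture, code size and training losses are not given in this introduction and will differ.

import torch
import torch.nn as nn

class Codebook(nn.Module):
    def __init__(self, num_codes=1024, dim=256):
        super().__init__()
        self.codes = nn.Embedding(num_codes, dim)

    def forward(self, z):                         # z: (B, N, dim) encoder features
        B, N, dim = z.shape
        flat = z.reshape(B * N, dim)
        d = torch.cdist(flat, self.codes.weight)  # (B*N, num_codes) distances
        idx = d.argmin(dim=-1).reshape(B, N)      # discrete token ids per location
        z_q = self.codes(idx)                     # (B, N, dim) quantized features
        z_q = z + (z_q - z).detach()              # straight-through estimator
        return z_q, idx

Under this reading, completion amounts to aligning a sparse scene's codes with those a dense scene would produce, and generation amounts to sampling codes from a learned prior, as the introduction describes.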
To summarize, the main contributions of this paper are:
1. We present a LiDAR representation that can effectively
capture data priors and enable various downstream ap-
plications, e.g., LiDAR completion and generation.
2. We propose a sparse-to-dense LiDAR completion
pipeline that can accurately densify sparse point clouds
and improve the performance and generalization abil-
ity of trained detection models across datasets.
3. We develop an (un)conditional LiDAR generative
model that can synthesize high-quality LiDAR point
clouds and supports various manipulations.
|
Wu_CORA_Adapting_CLIP_for_Open-Vocabulary_Detection_With_Region_Prompting_and_CVPR_2023 | Abstract
Open-vocabulary detection (OVD) is an object detection
task aiming at detecting objects from novel categories be-
yond the base categories on which the detector is trained.
Recent OVD methods rely on large-scale visual-language
pre-trained models, such as CLIP , for recognizing novel
objects. We identify the two core obstacles that need to
be tackled when incorporating these models into detec-
tor training: (1) the distribution mismatch that happens
when applying a VL-model trained on whole images to
region recognition tasks; (2) the difficulty of localizing
objects of unseen classes. To overcome these obstacles,
we propose CORA, a DETR-style framework that adapts
CLIP for Open-vocabulary detection by Region prompt-
ing and Anchor pre-matching. Region prompting mitigates
the whole-to-region distribution gap by prompting the re-
gion features of the CLIP-based region classifier. Anchor
pre-matching helps learning generalizable object localiza-
tion by a class-aware matching mechanism. We evaluate
CORA on the COCO OVD benchmark, where we achieve
41.7 AP50 on novel classes, which outperforms the pre-
vious SOTA by 2.4 AP50 even without resorting to extra
training data. When extra training data is available, we
train CORA+on both ground-truth base-category annota-
tions and additional pseudo bounding box labels computed
by CORA. CORA+achieves 43.1 AP50 on the COCO OVD
benchmark and 28.1 box APr on the LVIS OVD benchmark.
The code is available at https://github.com/tgxs002/CORA.
| 1. Introduction
Object detection is a fundamental vision problem that
involves localizing and classifying objects from images.
Classical object detection requires detecting objects from a
closed set of categories. Extra annotations and training are
required if objects of unseen categories need to be detected.
Detecting novel categories without tedious annotations, or
even detecting objects from entirely new categories, has thus
attracted much attention; this problem is currently referred to
as open-vocabulary detection (OVD) [36].
Recent advances on large-scale vision-language pre-
trained models, such as CLIP [30], enable new solutions for
tackling OVD. CLIP learns a joint embedding space of im-
ages and text from a large-scale image-text dataset, which
shows remarkable capability on visual recognition tasks.
The general idea of applying CLIP for OVD is to treat it
as an open-vocabulary classifier. However, there are two
obstacles hindering the effective use of CLIP on tackling
OVD.
How to adapt CLIP for region-level tasks? One trivial
solution is to crop regions and treat them as separate im-
ages, which has been adopted by multiple recent works
[7, 14, 31, 35]. But the distribution gap between region
crops and full images leads to inferior classification accu-
racy. MEDet [7] mitigates this issue by augmenting the
text feature with image features. However, it requires extra
image-text pairs to prevent overfitting to so-called “base”
classes that are seen during training. RegionCLIP [40] di-
rectly acquires regional features by RoIAlign [17], which
is more efficient but cannot generalize well to novel classes
without finetuning. The finetuning is costly when adopting
a larger CLIP model.
How to learn generalizable object proposals? ViLD [14],
OV-DETR [35], Object-Centric-OVD [31], Region-
CLIP [40] need RPN or class-agnostic object detectors [29]
to mine potential novel class objects. However, these RPNs
are strongly biased towards the base classes on which they
are trained, while performing poorly on the novel classes.
MEDet [7] and VL-PLM [39] identify this problem and
adopt several handcrafted policies to rule out or merge
low-quality boxes, but the performance is still bounded by
the frozen RPN. OV-DETR [35] learns generalizable object
localization by conditioning box regression on class name
embeddings, but at the cost of the efficiency issues induced by
repetitive per-class inference.
In this work, we propose a new framework based on DE-
tection TRansformers (DETR) [6] that incorporates CLIP
into detector training to achieve open-vocabulary detection
without additional image-text data. Specifically, we use a
DETR-style object localizer for class-aware object localiza-
tion, and the predicted boxes are encoded by pooling the in-
termediate feature map of the CLIP image encoder, which
are classified by the CLIP text encoder with class names.
However, there is a distribution gap between whole-image
features from CLIP’s original visual encoder and the newly
pooled region features, leading to an inferior classification
accuracy. Thus, we propose Region Prompting to adapt the
CLIP image encoder, which boosts the classification perfor-
mance, and also demonstrates better generalization capabil-
ity than existing methods. We adopt DAB-DETR [26] as
the localizer, in which object queries are associated with dy-
namic anchor boxes. By pre-matching the dynamic anchor
boxes with the input categories before box regression (An-
chor Pre-Matching), class-aware regression can be achieved
without the cost of repetitive per-class inference.
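The following is a rough PyTorch sketch of the region-prompting idea: RoI features pooled from a frozen CLIP visual backbone are adjusted by a small set of learnable prompt parameters before being matched to class-name text embeddings. How CORA parameterizes and applies its prompts is not detailed here, so the shapes and the additive form below are assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class RegionPromptedClassifier(nn.Module):
    def __init__(self, feat_dim=512, roi_size=7):
        super().__init__()
        # learnable region prompt, added to the pooled feature map
        self.region_prompt = nn.Parameter(torch.zeros(feat_dim, roi_size, roi_size))

    def forward(self, roi_feats, text_embeds):
        # roi_feats: (R, C, S, S) pooled from CLIP's intermediate feature map
        # text_embeds: (K, C) CLIP text embeddings of class names
        prompted = roi_feats + self.region_prompt           # region prompting
        region_embed = prompted.mean(dim=(2, 3))            # (R, C) region embedding
        region_embed = F.normalize(region_embed, dim=-1)
        text_embeds = F.normalize(text_embeds, dim=-1)
        return region_embed @ text_embeds.t()               # (R, K) class logits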
We validate our method on COCO [24] and LVIS
v1.0 [16] OVD benchmarks. On the COCO OVD bench-
mark, our method improves AP50 of novel categories over
the previous best method [40] by 2.4 AP50 without train-
ing on extra data, and achieves consistent gain on CLIP
models of different scales. When compared under a fairer
setting with extra training data, our method significantly
outperforms the existing methods by 3.8 AP50 on novel
categories and achieves comparable performance on base
categories. On the LVIS OVD benchmark, our method
achieves 22.2/28.1 APr with/w.o. extra data, which signif-
icantly outperforms existing methods that are also trained
with/w.o. extra data. By applying region prompting on the
base classes of COCO, the classification performance on
the novel classes is boosted from 63.9% to 74.1%, whereas
other prompting or adaptation methods easily bias towards
the base classes.
The contributions of this work are summarized as fol-
lows: (1) Our proposed region prompting effectively miti-
gates the gap between whole image features and region fea-
tures, and generalizes well in the open-vocabulary setting.
(2) Anchor Pre-Matching enables DETR for generalizable
object localization efficiently. (3) We achieve state-of-the-
art performance on COCO and LVIS OVD benchmarks.
|
Wang_Complete_3D_Human_Reconstruction_From_a_Single_Incomplete_Image_CVPR_2023 | Abstract
This paper presents a method to reconstruct a complete
human geometry and texture from an image of a person with
only partial body observed, e.g., a torso. The core challenge
arises from the occlusion: there exists no pixel to reconstruct
where many existing single-view human reconstruction meth-
ods are not designed to handle such invisible parts, leading
to missing data in 3D. To address this challenge, we intro-
duce a novel coarse-to-fine human reconstruction framework.
For coarse reconstruction, explicit volumetric features are
learned to generate a complete human geometry with 3D con-
volutional neural networks conditioned by a 3D body model
and the style features from visible parts. An implicit network
combines the learned 3D features with the high-quality sur-
face normals enhanced from multiviews to produce fine local
details, e.g., high-frequency wrinkles. Finally, we perform
progressive texture inpainting to reconstruct a complete ap-
pearance of the person in a view-consistent way, which is not
possible without the reconstruction of a complete geometry.
In experiments, we demonstrate that our method can recon-
struct high-quality 3D humans, which is robust to occlusion.
| 1. Introduction
How many portrait photos in your albums have the whole
body captured? Usually, the answer is not many. Taking a
photo of the whole body is often limited by a number of
factors of occlusion such as camera angles, objects, other
people, and self. While existing single-view human recon-
struction methods [3,43] have shown promising results, they
often fail to handle such incomplete images, leading to sig-
nificant artifacts with distortion and missing data in 3D for
invisible body parts. In this paper, we introduce a method
to reconstruct a complete 3D human model from a single
image of a person with occlusions as shown in Figure 1.
The complete 3D model can be the foundation for a wide
range of applications such as film production, video games,
virtual teleportation, and 3D avatar printing from a group-
shot photo. 3D human reconstruction from an image [2, 16]
Figure 1. The complete reconstruction results using our method
from the image of a person with occlusion by other people.
has been studied for two decades. The recent progress in
this topic indicates the neural network based implicit ap-
proach [3, 44] is a promising way for accurate detail recon-
struction. Such an approach often formulates the 3D human
reconstruction problem as a classification task: an implicit
network is designed to learn image features at each pixel,
e.g., pixel-aligned features [18, 43, 44], which enable con-
tinuous classification of the position in 3D along the camera
ray. While the implicit approaches have shown a strong per-
formance to produce the geometry with high-quality local
details, the learning of such an implicit model is often char-
acterized as 1) reconstructive: it estimates 3D only for the
pixels that are captured from a camera, i.e., no 3D recon-
struction is possible for missing pixels of invisible parts; and
2) globally incoherent: the ordinal relationship (front-back
relationship in 3D) of the reconstructed 3D points is often
not globally coherent, e.g., while the reconstruction of the
face surface is locally plausible, its combination with other
parts such as torso looks highly distorted. These properties
fundamentally limit the implicit network to reconstruct the
complete and coherent 3D human model from the image
with a partial body.
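For readers unfamiliar with the pixel-aligned formulation summarized above, the sketch below shows its typical shape (in the spirit of PIFu-style methods): a 3D query point is projected into the image, the image feature at that pixel is sampled, and an MLP classifies whether the point lies inside the surface. The camera projection is left abstract and the network sizes are illustrative only.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedImplicit(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 1, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, feat_map, points, cam):
        # feat_map: (B, C, H, W) image features; points: (B, N, 3) query points
        # cam: callable projecting 3D points to normalized image coords in [-1, 1]
        uv, z = cam(points)                                  # (B, N, 2), (B, N, 1)
        sampled = F.grid_sample(feat_map, uv.unsqueeze(2),   # (B, C, N, 1)
                                align_corners=True)
        sampled = sampled.squeeze(-1).transpose(1, 2)        # (B, N, C)
        occ = self.mlp(torch.cat([sampled, z], dim=-1))      # (B, N, 1) occupancy
        return torch.sigmoid(occ)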
In this paper, we overcome these fundamental limitations
of the implicit network by modeling generative and globally
coherent 3D volumetric features. To this end, we use a 3D
convolutional neural network that can explicitly capture the
global ordinal relation of a human body in the canonical 3D
volume space. It generates volumetric features by encoding
an incomplete image and a 3D body model, i.e., SMPL [30,
37], where the 3D body model provides the unified guidance
of the body pose in the coherent 3D space. These volumetric
features are jointly learned with a 3D discriminator in a way
that generates a coarse yet complete 3D geometry.
The complete 3D geometry enables the coherent render-
ing of its shape over different viewpoints, which makes it
possible to enhance surface normals and inpaint textures
in a multiview-consistent way. Specifically, for surface nor-
mal enhancement, a neural network takes as input a coarse
rendering of the surface normal and style features; and out-
puts the fine surface normal with plausible high-frequency
details. We design a novel normal fusion network that can
combine the fine surface normals from multiviews with the
learned volumetric features to upgrade the quality of local
geometry details. For texture inpainting, a neural network
conditioned on the fine surface normal and an incomplete
image generates the complete textures. The inpainted tex-
tures are progressively combined from multiviews through
the 3D geometry.
Unlike previous methods [43, 43, 44, 52] which have uti-
lized the surface normals from limited views ( e.g., front and
back), our multiview normal fusion approach can produce
more coherent and refined reconstruction results by incorpo-
rating fine-grained surface normals from many views.
Our experiments demonstrate that our method can ro-
bustly reconstruct a complete 3D human model with plausi-
ble details from the image of a partial body, outperforming
previous methods while still obtaining comparable results in
the full-body case.
The technical contributions of this work include (1) a
new design of generative and coherent volumetric features
which make it possible for an implicit network to reconstruct a
complete 3D human from an incomplete image; (2) a novel
multiview normal fusion approach that upgrades the quality
of local geometry details in a view-coherent way; and (3) an
effective texture inpainting pipeline using the reconstructed
3D geometry.
|
Wang_Dynamically_Instance-Guided_Adaptation_A_Backward-Free_Approach_for_Test-Time_Domain_Adaptive_CVPR_2023 | Abstract
In this paper, we study the application of Test-time
domain adaptation in semantic segmentation (TTDA-Seg)
where both efficiency and effectiveness are crucial. Exist-
ing methods either have low efficiency (e.g., backward op-
timization) or ignore semantic adaptation (e.g., distribution
alignment). Besides, they would suffer from the accumu-
lated errors caused by unstable optimization and abnormal
distributions. To solve these problems, we propose a novel
backward-free approach for TTDA-Seg, called Dynamically
Instance-Guided Adaptation (DIGA). Our principle is uti-
lizing each instance to dynamically guide its own adapta-
tion in a non-parametric way, which avoids the error ac-
cumulation issue and expensive optimizing cost. Specifi-
cally, DIGA is composed of a distribution adaptation mod-
ule (DAM) and a semantic adaptation module (SAM), en-
abling us to jointly adapt the model in two indispensable
aspects. DAM mixes the instance and source BN statistics to
*Corresponding authorencourage the model to capture robust representation. SAM
combines the historical prototypes with instance-level pro-
totypes to adjust semantic predictions, which can be asso-
ciated with the parametric classifier to mutually benefit the
final results. Extensive experiments evaluated on five target
domains demonstrate the effectiveness and efficiency of the
proposed method. Our DIGA establishes new state-of-the-
art performance in TTDA-Seg. Source code is available at:
https://github.com/Waybaba/DIGA.
| 1. Introduction
Semantic segmentation (Seg) [3, 40, 45, 46, 49] is a fun-
damental task in computer vision, which is an important
step in vision-based robotics, autonomous driving, etc.
Modern deep-learning techniques have achieved impressive
success in segmentation. However, one serious drawback
of them is that the segmentation models trained on one
dataset (source domain) may undergo catastrophic perfor-
mance degradation when applied to another dataset sam-
pled from a different distribution. This phenomenon will be
even more serious under complex and ever-changing con-
texts, e.g., autonomous driving.
To solve this well-known problem caused by domain
shifts, researchers have devoted great effort to domain gen-
eralization (DG) [6,11,18,19,21,30] and domain adaptation
(DA) [23,47,47,50]. Specifically, DG aims to learn general-
ized models with only labeled source data. Traditional DA
attempts to adapt the model on the target domain by using
both labeled source data and unlabeled target data. How-
ever, both learning paradigms have their own disadvantages.
The performance of DG is limited especially when evalu-
ated on a domain with a large gap from the source since
it does not leverage target data [11]. DA assumes that the
unlabeled target data are available in advance and can be
chronically exploited to improve target performance. This
assumption, however, can not always be satisfied in real-
world applications. For example, when driving in a new
city, the data are incoming sequentially and we expect the
system to dynamically adapt to the ever-changing scenario.
To meet the real-world applications, [41] introduces the
test-time domain adaptation (TTDA), which aims at adapt-
ing the model during the testing phase in an online fashion
(see Fig. 1 Top). Generally, existing methods can be divided
into two categories: backward-based methods [1,22,27,37,
41] and backward-free methods [15,25,28,33]. The former
category (see Fig. 1 (a)) focuses on optimizing the parame-
ters of models with self-supervision losses, such as entropy
loss [27, 41]. In this way, both distribution adaptation and
semantic adaptation can be achieved, which however has
the following drawbacks. (1) Low-Efficiency : Due to the
requirement of back-propagation, the computation cost will
be multiplied, leading to low efficiency. (2) Unstable Op-
timization & Error Accumulation: Since the gradient is
calculated with single sample by weak supervision, the ran-
domness could be high thus leading to unstable optimiza-
tion. Although this problem can be mitigated to some ex-
tent by increasing the testing batch size, it still cannot be
solved well. In such cases, the accumulated errors may lead
the model to forget the original well-learned knowledge and
thus cause performance degradation.
The second category aims to adapt the model in the dis-
tribution level by updating statistics in batch normalization
(BN) [25] layers, which is very efficient as it is directly im-
plemented in forward propagation with a light computation
cost. Instance normalization [28] (see Fig. 1 (b)) directly
replaces the source statistics with those from each instance,
which is sensitive to the target variations due to discard-
ing the basic source knowledge and thus is unstable. Mirza
et al [25] (see Fig. 1 (c)) study the impacts of updating
the historical statistics by instance statistics with fixed mo-
mentum or dynamically fluctuating momentum. However,
these methods also suffer from the error accumulation is-
Figure 2. Illustration of the implementation of our DIGA. Given a
source model, our DIGA can be readily equipped with only access
to the BN layers, classifier head and feature head.
sue caused by abnormal target distributions as well as the
neglect of semantic adaptation, both of which will result in
inferior adaptation performance.
To this end, we propose a holistic approach (see
Fig. 1 (d)), called Dynamically Instance-Guided Adaptation
(DIGA), for TTDA-Seg, which takes into account both ef-
fectiveness and efficiency. The main idea of DIGA is lever-
aging each instance to dynamically its own adaptation in a
non-parametric manner, which is efficient and can largely
avoid the error accumulation issue. In addition, our DIGA
is implemented in a considerate manner by injecting with
distribution adaptation module (DAM) and semantic adap-
tation module (SAM). Specifically, in DAM, we compute
the weighed sum of the source and current statistics in BN
layers to adapt target distribution, which enables the model
to obtain a more robust representation. In SAM, we build
a dynamic non-parametric classifier by mixing the histori-
cal prototypes with instance-level prototypes, enabling us
to adjust the semantic prediction. In addition, the non-
parametric classifier can be associated with the parametric
one, which can further benefit the adaptation results. Our
contributions can be summarized as follows:
•Efficiency. We propose a backward-free approach for
TTDA-Seg, which can be implemented within one for-
ward propagation with a light computation cost.
•Effectiveness. We introduce a considerate approach
to adapt the model in both distribution and semantic
aspects. In addition, our method takes the mutual ad-
vantage of two types of classifiers to achieve further
improvements.
•Usability. Our method is easy to implement and is
model-agnostic, which can be readily injected into ex-
isting models (see Fig.2).
•Promising Results. We conduct experiments on three
source domains and five target domains based on driv-
ing benchmarks and show that our method produces
new state-of-the-art performance for TTDA-Seg. We
also study the continual TTDA-Seg and verify the su-
periority of our method in this challenging task.
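As a concrete illustration of the DAM idea described earlier, namely blending source BN statistics with statistics computed from the current test instance, the following minimal sketch shows the forward pass of a single BN layer under such mixing. The weighting scheme actually used by DIGA, and the SAM prototype classifier, are simplified away here.

import torch

def mix_bn_forward(x, bn, lam=0.9):
    """x: (B, C, H, W); bn: a trained nn.BatchNorm2d holding source statistics.
    lam weights the source statistics against the instance statistics."""
    inst_mean = x.mean(dim=(0, 2, 3))
    inst_var = x.var(dim=(0, 2, 3), unbiased=False)
    mean = lam * bn.running_mean + (1.0 - lam) * inst_mean
    var = lam * bn.running_var + (1.0 - lam) * inst_var
    x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + bn.eps)
    return x_hat * bn.weight[None, :, None, None] + bn.bias[None, :, None, None]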
|
Wang_Clothed_Human_Performance_Capture_With_a_Double-Layer_Neural_Radiance_Fields_CVPR_2023 | Abstract
This paper addresses the challenge of capturing perfor-
mance for the clothed humans from sparse-view or monoc-
ular videos. Previous methods capture the performance offull humans with a personalized template or recover the
garments from a single frame with static human poses.
However , it is inconvenient to extract cloth semantics andcapture clothing motion with one-piece template, while sin-
gle frame-based methods may suffer from instable trackingacross videos. To address these problems, we propose anovel method for human performance capture by trackingclothing and human body motion separately with a double-
layer neural radiance fields (NeRFs). Specifically, we pro-pose a double-layer NeRFs for the body and garments, and
track the densely deforming template of the clothing andbody by jointly optimizing the deformation fields and thecanonical double-layer NeRFs. In the optimization, we in-
troduce a physics-aware cloth simulation network which
can help generate physically plausible cloth dynamics andbody-cloth interactions. Compared with existing method-
s, our method is fully differentiable and can capture both
the body and clothing motion robustly from dynamic videos.Also, our method represents the clothing with an indepen-
dent NeRFs, allowing us to model implicit fields of generalclothes feasibly. The experimental evaluations validate its
effectiveness on real multi-view or monocular videos.
| 1. Introduction
Performance capture for clothed humans is one of the
essential problems in the metaverse, and it not only cap-tures the inner human body motion but also recovers theouter clothing motion which has many promising applica-tions such as virtual try-on, video editing, and telepresence.From sparse-view or monocular videos of a moving human
∗Corresponding author: [email protected] general clothes, its goal is to recover the dynamic 3D
shape sequence of the human body and clothing simultane-ously that are consistent with the observed frames in bothhuman shape and motion. This is a very challenging prob-
lem since the dynamic human could be with arbitrary mo-tions and with complex non-rigid cloth deformations, and
the clothing in motion is difficult to maintain physically
plausible interactions with the body.
Previous systems [ 4,21,38,39] reconstruct 3D clothed
humans by using depth sensors or fitting a personalized
template [ 9,10,37] to the image observations (e.g., body
joints and silhouettes). Only recovering one-piece geome-
try which unifies the human body and clothes, these systemsfail to track the motion of the clothing and achieve clothing
editing on 3D humans, which are the prerequisites in many
VR/AR applications like virtual dressing. On the contrary,
cloth can be extracted and tracked from depth scans [ 26,40]
accurately by fitting pre-built cloth templates to the scanswhich have limited applications when 3D data are unavail-able. Existing garment estimation methods [ 3,12,43,44]
from color images require the person facing the camera and
in static poses. When the human is moving and the cloth-
ing is deforming, these methods may recover the 3D gar-ments unreliably. Recent methods [ 16,31] track body and
clothing motion simultaneously from videos, but they need
to re-build cloth template for a new performer and the run-
ning efficiency is very low due to online cloth simulation or
computationally-exhaustive optimization, which prohibitsthem from being widely deployed for daily applications.
Recent works [ 20,34,41] adopt dynamic human NeRF-
s to capture human motion and obtain impressive tracking
results. By capturing the temporally-varying appearance inthe videos, dynamic NeRFs [ 34] can provide dense photo-
metric constraints to track the deforming geometry of theperformer. However, they represent the human with a sin-
gle NeRFs without modeling cloth, and the clothing motion
cannot be extracted. In this paper, we aim to track the cloth-
ing and body motion simultaneously with dynamic NeRFs.
However, this problem is rather challenging due to two ma-
jor questions we need to solve: how to represent dynamicclothing and human body with NeRFs, and how to capture
clothing and human body motion with plausible body-cloth
interactions based on the implicit representation.
In this paper, we propose a novel method for clothed
human performance capture with a double-layer NeRFs.Specifically, a double-layer NeRFs is modeled for both thebody and clothing in the template space, and transformedto the observation space with the corresponding deforma-tion fields, and the rendered images are then synthesizedby composing the two dynamic NeRFs. We first estimatethe template shape in canonical frame and learn the geom-etry network supervised by the template geometry. In therendering, we compose the double-layer NeRFs with theguidance of deformed body and clothing meshes. Then, byminimizing the difference between the rendered color andobserved color, the deformation fields and the canonical N-eRFs are optimized jointly. The deformation field is rep-resented as the inverse deformation of the template mesh,thus the densely deforming geometry of the template can berecovered simultaneously. In addition, we adopt a physics-
aware network learnt from simulation data between various
cloth types and humans to constrain the dynamic clothing
and preserve physically plausible body-cloth interactions,
resulting in realistic cloth geometry tracking. Compared toprevious methods, our method is fully differentiable and can
recover the realistic motion of both the clothing and body
from dynamic videos with arbitrary human poses and com-plex cloth deformations. The experimental qualitative andquantitative results on datasets of DynaCap [ 8] and Deep-
Cap [ 10] prove that the proposed approach can robustly and
accurately capture the motion for clothed humans. In sum-
mary, the primary contributions of our work include:
•We propose a double-layer NeRFs for dynamic hu-
mans in general clothing, allowing us to model implicit
humans with a variety of clothes (e.g., loose dresses).
•To the best of our knowledge, we propose the first
framework to capture clothing motion separately fromthe human body using the double-layer NeRFs, which
provides dense appearance constraints on the geometrytracking and improves the robustness and accuracy.
•A differentiable physics-aware network is learnt for d-
ifferent common garments and used to preserve physi-
cally plausible cloth deformations in motion capture.
|
Wang_Multi-Modal_Learning_With_Missing_Modality_via_Shared-Specific_Feature_Modelling_CVPR_2023 | Abstract
The missing modality issue is critical but non-trivial to
be solved by multi-modal models. Current methods aim-
ing to handle the missing modality problem in multi-modal
tasks, either deal with missing modalities only during eval-
uation or train separate models to handle specific missing
modality settings. In addition, these models are designed
for specific tasks, so for example, classification models are
not easily adapted to segmentation tasks and vice versa. In
this paper, we propose the Shared-Specific Feature Mod-
elling (ShaSpec) method that is considerably simpler and
more effective than competing approaches that address the
issues above. ShaSpec is designed to take advantage of all
available input modalities during training and evaluation
by learning shared and specific features to better represent
the input data. This is achieved from a strategy that relies
on auxiliary tasks based on distribution alignment and do-
main classification, in addition to a residual feature fusion
procedure. Also, the design simplicity of ShaSpec enables
its easy adaptation to multiple tasks, such as classification
and segmentation. Experiments are conducted on both med-
ical image segmentation and computer vision classification,
with results indicating that ShaSpec outperforms competing
methods by a large margin. For instance, on BraTS2018,
ShaSpec improves the SOTA by more than 3% for enhanc-
ing tumour, 5% for tumour core and 3% for whole tumour.
| 1. Introduction
Recently, multi-modal learning has attracted much atten-
tion by research and industry communities in both computer
vision and medical image analysis. Audio, images and short
videos are becoming common types of media being used
for multiple types of model prediction in many different applications, such as sound source localisation [6], self-driving
vehicles [32] and vision-and-language applications [28,33].
Similarly, in medical domain, combining different modali-
ties to improve diagnosis accuracy has become increasingly
important [9, 29]. For instance, Magnetic Resonance Imag-
ing (MRI) is a common tool for brain tumour detection,
which does not depend only on one type of MRI image, but
on multiple modalities (i.e. Flair, T1, T1 contrast-enhanced
and T2). However, the multi-modal methods above usu-
ally require the completeness of all modalities for train-
ing and evaluation, limiting their applicability in real-world
with missing-modality challenges when subsets of modali-
ties may be missing during training and testing.
Such challenge has motivated both computer vision [20]
and medical image analysis [5, 8, 13, 25] communities to
study missing-modality multi-modal approaches. Wang et
al. [31] proposed an adversarial co-training network for
missing modality brain tumour segmentation. They specif-
ically introduced a “dedicated” training strategy defined by
a series of independent models that are specifically trained
to each missing situation. Another interesting point about
all previous methods is that they have been specifically de-
veloped either for (computer vision) classification [20] or
(medical imaging) segmentation [5, 8, 13, 25, 31], making
their extension to multiple tasks challenging.
In this paper, we propose a multi-model learning with
missing modality approach, called Shared-Spec ific Feature
Modelling (ShaSpec), which can handle missing modali-
ties in both training and testing, as well as dedicated train-
ing and non-dedicated training2. Also, compared with pre-
viously models, ShaSpec is designed with a considerably
simpler and more effective architecture that explores well-
understood auxiliary tasks (e.g., the distribution alignment
and domain classification of multi-modal features), which
enables ShaSpec to be easily adapted to classification and
segmentation tasks. The main contributions are:
• An extremely simple yet effective multi-modal learn-
2Non-dedicated training refers to training one model to handle different
missing modality combinations.
ing with missing modality method, called Shared-
Specific Feature Modelling (ShaSpec), which is based
on modelling and fusing shared and specific features to
deal with missing modality in training and evaluation
and with dedicated and non-dedicated training;
• To the best of our knowledge, the proposed ShaSpec is
the first missing modality multi-modal approach that
can be easily adapted to both classification and seg-
mentation tasks given the simplicity of its design.
Our results on computer vision classification and medi-
cal imaging segmentation benchmarks show that ShaSpec
achieves state-of-the-art performance. Notably, com-
pared with recently proposed competing approaches on
BraTS2018, our model shows segmentation accuracy im-
provements of more than 3% for enhancing tumour, 5% for
tumour core and 3% for whole tumour.
|
Xiao_DLBD_A_Self-Supervised_Direct-Learned_Binary_Descriptor_CVPR_2023 | Abstract
For learning-based binary descriptors, the binarization
process has not been well addressed. The reason is that
the binarization blocks gradient back-propagation. Existing
learning-based binary descriptors learn real-valued output,
and then it is converted to binary descriptors by their pro-
posed binarization processes. Since their binarization pro-
cesses are not a component of the network, the learning-
based binary descriptor cannot fully utilize the advances of
deep learning. To solve this issue, we propose a model-
agnostic plugin binary transformation layer (BTL), making
the network directly generate binary descriptors. Then, we
present the first self-supervised, direct-learned binary de-
scriptor, dubbed DLBD. Furthermore, we propose ultra-
wide temperature-scaled cross-entropy loss to adjust the
distribution of learned descriptors in a larger range. Ex-
periments demonstrate that the proposed BTL can substi-
tute the previous binarization process. Our proposed DLBD
outperforms SOTA on different tasks such as image retrieval
and classification1.
| 1. Introduction
Feature descriptors have played an essential role in many
computer vision tasks [4, 7, 11], such as visual search, im-
age classification, and panorama stitching. The research
of descriptors is intended to bridge the gap between low-
level pixels and high-level semantic information. With fea-
ture descriptors developing, lightweight binary descriptors
have been proposed to further reduce memory consumption
and computational complexity. The initial research focused
on hand-crafted binary descriptors, such as [1, 18, 20]. Al-
though hand-crafted binary descriptors are effective, their
discriminability and robustness still leave much room for improvement.
With the advent of Neural Networks, research has gradually
1Our code is available at: https://github.com/CQUPT-CV/DLBD
Figure 1. An example of the issue in the existing binarization
processes. Images in this example come from CIFAR10, sized
32×32, and their descriptors are represented by 8-bit.
moved away from hand-crafted descriptors to ones utilizing
the powerful learning ability of Neural Networks.
The learning-based binary descriptors can be divided
into supervised learning-based binary descriptors and un-
supervised learning-based ones. The supervised learning-
based binary descriptors [23, 24, 27], such as L2Net [23],
have excellent performance far surpassing hand-crafted bi-
nary descriptors. However, there is a critical issue: the su-
pervised learning-based binary descriptors depend on mas-
sive training data with human-annotated labels, which re-
quires many labor costs. Therefore, researchers present un-
supervised learning-based binary descriptors [8–10, 15, 25,
26, 29].
It is worth noting that the derivative of the binariza-
tion function as a step function at non-zero points is 0,
which cannot back-propagate gradients and then prevents
the learning-based binary descriptors from directly training
and outputting binary descriptors. For this reason, the exist-
ing learning-based binary descriptors employ separate bi-
narization processes, which convert the real-valued output
of the trained network to the binary descriptors. However,
a |
Xiao_LSTFE-NetLong_Short-Term_Feature_Enhancement_Network_for_Video_Small_Object_Detection_CVPR_2023 | Abstract
Video small object detection is a difficult task due to the
lack of object information. Recent methods focus on adding
more temporal information to obtain more potent high-level
features, which often fail to specify the most vital informa-
tion for small objects, resulting in insufficient or inappro-
priate features. Since information from frames at differ-
ent positions contributes differently to small objects, it is
not ideal to assume that using one universal method will
extract proper features. We find that context information
from the long-term frame and temporal information from
the short-term frame are two useful cues for video small ob-
ject detection. To fully utilize these two cues, we propose a
long short-term feature enhancement network (LSTFE-Net)
for video small object detection. First, we develop a plug-
and-play spatio-temporal feature alignment module to cre-
ate temporal correspondences between the short-term and
current frames. Then, we propose a frame selection mod-
ule to select the long-term frame that can provide the most
additional context information. Finally, we propose a long
short-term feature aggregation module to fuse long short-
term features. Compared to other state-of-the-art meth-
ods, our LSTFE-Net achieves 4.4% absolute boosts in AP
on the FL-Drones dataset. More details can be found at
https://github.com/xiaojs18/LSTFE-Net .
| 1. Introduction
Video small object detection plays an important role
in many fields such as automatic driving, remote sensing,
medical image, and industrial defect detection [26]. How-
ever, it is still a difficult task due to the lack of pixel infor-
mation and the difficulty of feature extraction. Therefore,
the topic of how to enhance the features of small objects
has attracted great attention [1, 7, 16].
Figure 1. The architecture of the proposed LSTFE-Net. The
current frame (Cur-frame), short-term frames (ST-frames) near
the Cur-frame, and long-term frames (LT-frames) sampled from
the whole video first go through the feature extraction network.
Then the Cur-frame feature and ST-frame features are connected
through the spatio-temporal feature alignment module, and the
frame selection module searches the background context of LT-
frame features. After getting the Proposal-Level (PL) features, the
long short-term feature aggregation module finally integrates the
long short-term features into the Cur-frame to make feature en-
hancement. Best viewed in color and zoomed in.
Some recent works have proved that the improvement of
video small object detection performance requires full uti-
lization of information in the temporal dimension. While
detecting small objects in the current frame may suffer from
many problems such as motion blur, low resolution, and
too small size, effective modeling of information from other
frames can help address these problems [2, 4, 9, 17]. There
is a high similarity in nearby frames because of the strong
time continuity between them, so it is natural to empha-
size the importance of short-term frames, which are near
the current frame. According to FGFA [28] and STSN [2],
short-term frame features can be aligned and aggregated to
provide more useful information for small objects.
Context information is important for small object detec-
tion [16,25]. Because a large number of images are sampled
from the same video for the same object, the background
context information of the object is single. Additionally,
only part of the frames in the video are sampled as current
frames (such as 15 frames) while training, which results in
the lack of real background context information and reduces
the robustness of the training model. Compared to features
in a short range, features from the whole video level can be
more discriminative and robust [21, 23]. And it is noticed
in prior works [9, 23] that more contextual information will
be provided when using long-term frames sampled from the
whole video.
The impacts of short-term and long-term frames in de-
tection have been studied in recent methods. However, these
methods have obvious disadvantages in both efficiency and
accuracy, especially for small objects in videos. Some
methods [22, 27, 28] extract information from the short-
term frame and exploit the flow model to propagate features
across adjacent frames, however, this is expensive because
the flow model is hard to construct and transplant. Other
methods [21, 23] focus on semantic information from long-
term frames and incorporate randomly sampled long-term
frames in detection, which causes uncertainty of detection
performance and the loss of valuable information. Besides,
these methods above cannot figure out the specific informa-
tion that matters most for small objects from frames. Some
methods [21–23, 27, 28] think the information in the video
is single and miss considering distinct information from
different frames, getting inadequate features. Other meth-
ods [3, 5] focus on extracting high-level features from the
video which are not suitable for small objects due to their
special properties.
To better mine information from both short-term frames
and long-term frames, we propose a long short-term fea-
ture enhancement network (LSTFE-Net) for video small
object detection. Specifically, the features of short-term
frames are expected to correspond to the current frame in
a low-cost and effective way, so a spatio-temporal feature
alignment module is designed to propagate features across
nearby frames. Further, in order to increase the benefit of
aligned features while not increasing too much complexity
of the model, a spatio-temporal feature aggregation method
is also added. The context information is expected to be
highlighted from the whole video, prompting a frame se-
lection module to select the long-term frame feature. The
goal is to make effective feature enhancement after the fea-
tures are collected, and the establishment of connections
between different features is enforced. A long short-term
feature aggregation module is devised to aggregate features
from the current frame, the short-term frames, and the long-term frames by stages. The performance of the proposed
method is evaluated on the open dataset, and experiment re-
sults demonstrate that our method has obvious advantages
in video small object detection. The architecture of the net-
work is shown in Fig. 1.
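A simplified sketch of attention-based aggregation of Proposal-Level features is given below: current-frame proposal features attend to a pool of short- and long-term proposal features and are enhanced residually. The staged fusion and exact attention design of LSTFE-Net are not reproduced here.

import torch
import torch.nn.functional as F

def aggregate_pl_features(cur, support, temperature=1.0):
    """cur: (Nc, C) current-frame proposal features;
    support: (Ns, C) short/long-term proposal features to aggregate from."""
    attn = (cur @ support.t()) / (temperature * cur.shape[-1] ** 0.5)  # (Nc, Ns)
    attn = F.softmax(attn, dim=-1)
    return cur + attn @ support          # residual enhancement of current features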
Our main contributions are summarized as follows:
(1) An LSTFE-Net is proposed to effectively enhance
small object features and improve detection performance.
(2) A plug-and-play spatio-temporal feature alignment
module is designed for aligning features across adjacent
frames. A flexible way to make Pixel-Level feature en-
hancement for small objects using aligned features is also
explored. Combined with the Proposal-Level feature en-
hancement, this module achieves multi-level enhancement
to improve the feature expressiveness. The whole module
is easy to transplant and proved to be effective, which re-
veals its potential ability to benefit most of the works.
(3) A frame selection module is proposed to ensure the
utilization of high-value input data, and it selects the long-
term frame feature with the most context information. This
module reinforces the network to automatically look for
useful information for small objects, improving its stabil-
ity and performance in video small object detection.
(4) To effectively integrate the long-term frame features
and short-term frame features into the current frame, a long
short-term feature aggregation module is proposed to aggre-
gate different features in different stages. This enables the
relations between Proposal-Level features to be built adap-
tively based on an attention mechanism, which also means
our feature enhancement for small objects can be accom-
plished in a general and limitless way.
|