title | abstract | introduction |
---|---|---|
Zhou_STAR_Loss_Reducing_Semantic_Ambiguity_in_Facial_Landmark_Detection_CVPR_2023 | Abstract
Recently, deep learning-based facial landmark detection
has achieved significant improvement. However, the se-
mantic ambiguity problem degrades detection performance.
Specifically, the semantic ambiguity causes inconsistent an-
notation and negatively affects the model’s convergence,
leading to worse accuracy and unstable predictions. To solve this problem, we propose a Self-adapTive Ambiguity
Reduction (STAR) loss by exploiting the properties of se-
mantic ambiguity. We find that semantic ambiguity results
in the anisotropic predicted distribution, which inspires us
to use the predicted distribution to represent semantic ambiguity. Based on this, we design the STAR loss that measures the anisotropism of the predicted distribution. Com-
pared with the standard regression loss, STAR loss is en-
couraged to be small when the predicted distribution is
anisotropic and thus adaptively mitigates the impact of se-
mantic ambiguity. Moreover, we propose two kinds of eigenvalue restriction methods that could avoid both the distribu-
tion’s abnormal change and the model’s premature con-
vergence. Finally, the comprehensive experiments demon-
strate that STAR loss outperforms the state-of-the-art meth-
ods on three benchmarks, i.e., COFW, 300W, and WFLW,
with negligible computation overhead. Code is at https://github.com/ZhenglinZhou/STAR.
| 1. Introduction
Facial landmark detection, which aims to locate a group
of pre-defined facial landmarks from images [48, 51,56], is
a fundamental problem for many downstream tasks, includ-
ing face verification [12], face synthesis [1], and 3D face
reconstruction [10, 13,17,46].
Thanks to the development of Convolutional Neural Networks (CNNs) [20, 36, 39], facial landmark detection has improved significantly.
†Corresponding author. *Equal contribution. This work was done when Zhenglin Zhou was an intern at Tencent PCG.
Figure 1. The impact of semantic ambiguity. We visualize the outputs of five models trained with the same architecture under the same experimental setting. (1) The first row shows the predicted facial landmarks for Mr. Tony Stark, marked as red points, and the green point refers to the corresponding mean value. (2) The second row shows the results of the predicted probability distribution (i.e., heatmap) from one of the trained models.
At first, coordinate regression meth-
ods [3, 16,33,47] are proposed to learn the transforma-
tion between CNN features and landmark locations via fully
connected layers. Recently, the research focus has been the
heatmap regression methods, which have shown superiority over coordinate regression methods. The heatmap regres-
sion methods [21, 24,48] predict an intermediate heatmap
for each landmark and decode the coordinates from the
heatmap. But, the commonly used decoder, Argmax [55], is
not differentiable and suffers from quantization error. Recently, some solutions have been proposed [38, 53], and
the focus is on the differentiable expectation decoder: soft-
Argmax [32]. With the help of soft-Argmax, the heatmap
regression method has the advantage of end-to-end train-
ing. So the training loss is mainly composed of a regression loss (such as L2), which makes the model prediction fit the
manual annotation.
Figure 2. The overview of our framework. We use a four stacked Hourglasses (HGs) Network. To mitigate the impact of semantic ambiguity, the STAR loss is applied to each HG module. (Best viewed in color.)
However, the manual annotation suffers from the semantic ambiguity problem [16, 21, 30]. Specifically, some fa-
cial landmarks, especially landmarks located on face con-
tour, do not have a clear and accurate definition. For exam-
ple, the contour landmarks are defined to evenly distribute
around the face contour without a clear definition of the po-
sitions [30]. It makes human annotators confused about the exact position and inevitably induces inconsistent and im-
precise annotations. Thus, we argue that the regression loss
will be misled by ambiguous annotations and degrade the
model’s convergence and performance. As shown in Figure
1, training the neural network with ambiguous annotations
makes the predictions for facial contour landmarks unstable
and inaccurate, which will hurt the downstream task [ 15].
The key problem is to design a new regression loss that mit-
igates the impact of semantic ambiguity.
In this paper, we propose a novel self-adaptive ambiguity
reduction method, STAR loss, by fully exploiting semantic
ambiguity. To this end, we explore the impact of seman-
tic ambiguity on the heatmap. Typically, the distribution
normalization loss forces the predicted probability distri-
bution to resemble an isotropic Gaussian distribution [ 32].
However, as shown in Figure 1, compared with the isotropic
distribution of the eye corner point, the predicted distribu-
tion of the facial contour point is anisotropic. The main
difference between the two landmarks is semantic ambigu-
ity, which is more severe in the contour point. We infer that
the semantic ambiguity is related to the anisotropic distribu-
tion. When the predicted distribution of one facial landmark
is anisotropic, this facial landmark has severe semantic am-
biguity (which hinders model convergence), so it is necessary
to reduce its impact.
To this end, we begin our story by introducing a cus-
tomized principal component analysis (PCA) that can pro-
cess the discrete probability distribution, which contains
three steps: weighted mean estimation, unbiased weighted
covariance estimation, and eigen-decomposition. We decompose a group of predicted distributions and visualize the
corresponding principal components. The visualization re-
sults show that the first principal component lies along
the face contour. Meanwhile, the ambiguous direction for
contour landmarks is aligned with the face contour. There-
fore, we infer that their first principal component direction
is highly consistent with their ambiguity direction.
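A minimal sketch of this customized PCA on a discrete heatmap is given below; it is our own illustration (the function and variable names are ours), not the authors' released code.

```python
import numpy as np

def heatmap_pca(heatmap):
    """Weighted PCA of a discrete 2D probability distribution (heatmap):
    1) weighted mean, 2) unbiased weighted covariance, 3) eigen-decomposition."""
    h, w = heatmap.shape
    weights = (heatmap / heatmap.sum()).ravel()              # normalize to a distribution
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)

    mu = (weights[:, None] * coords).sum(axis=0)             # 1) weighted mean estimation

    centered = coords - mu                                   # 2) unbiased weighted covariance
    cov = (weights[:, None] * centered).T @ centered
    cov /= 1.0 - (weights ** 2).sum()

    vals, vecs = np.linalg.eigh(cov)                         # 3) eigen-decomposition (ascending order)
    return mu, vals[::-1], vecs[:, ::-1]                     # energies and directions, largest first
```

For an anisotropic contour-point heatmap the two returned energies differ strongly, whereas for an eye-corner point they are nearly equal.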
According to this new observation, we design our STAR
loss, which decomposes the prediction error into two princi-
pal component directions and divides it by the correspond-
ing energy value. For a facial landmark with anisotropic
predicted distribution, the energy of the first principal com-
ponent is higher than the second. In this way, the error in the
first principal component direction can be adaptively sup-
pressed, thereby alleviating the impact of ambiguous anno-
tations on training. However, we find this initial version of
STAR loss suffers from an abnormal energy increase, leading to premature convergence. We find that this anomaly arises because the model tends to increase the energy to minimize the STAR loss. To solve this problem, we propose two kinds of eigenvalue restriction methods to prevent the STAR loss from decreasing abnormally.
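Continuing the sketch above (again a hedged illustration rather than the official implementation), the error of one landmark can be decomposed along the two principal directions and scaled by their energies, so the ambiguous direction contributes less:

```python
import numpy as np

def star_style_loss(pred, target, energies, directions, eps=1e-6):
    """pred, target: (2,) coordinates; energies, directions: output of heatmap_pca above."""
    err = pred - target
    loss = 0.0
    for k in range(2):
        e_k = np.abs(directions[:, k] @ err)     # error component along the k-th principal direction
        loss += e_k / (energies[k] + eps)        # dividing by the energy suppresses the ambiguous axis
    return loss
```

As the text notes, an eigenvalue restriction is still needed on top of such a loss, since the model could otherwise inflate the energies themselves to shrink the loss.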
We evaluate the STAR loss on three widely-used bench-
marks, i.e., COFW [5], 300W [35], and WFLW [48]. Ex-
periments show that STAR loss indeed helps deep models
achieve competitive performance compared to state-of-the-
art methods. Code will be released for reproduction.
|
Zhang_WeatherStream_Light_Transport_Automation_of_Single_Image_Deweathering_CVPR_2023 | Abstract
Today single image deweathering is arguably more sen-
sitive to the dataset type, rather than the model. We intro-
duce WeatherStream, an automatic pipeline capturing all
real-world weather effects (rain, snow, and rain fog degra-
dations), along with their clean image pairs. Previous state-
of-the-art methods that have attempted the all-weather re-
moval task train on synthetic pairs, and are thus limited by
the Sim2Real domain gap. Recent work has attempted to
manually collect time multiplexed pairs, but the use of hu-
man labor limits the scale of such a dataset. We introduce
a pipeline that uses the power of light-transport physics
and a model trained on a small, initial seed dataset to re-
ject approximately 99.6% of unwanted scenes. The pipeline
is able to generalize to new scenes and degradations that
can, in turn, be used to train existing models just like
fully human-labeled data. Training on a dataset collected
through this procedure leads to significant improvements
on multiple existing weather removal methods on a care-
fully human-collected test set of real-world weather effects.
The dataset and code can be found at the following website:
http://visual.ee.ucla.edu/wstream.htm/.
| 1. Introduction
Single-image deweathering aims to remove image degra-
dations caused by rain, fog, or snow.
*Equal contribution.
Single-image deweathering is a mainstay of modern computer vision, val-
ued for the aesthetic appeal of removing weather degrada-
tions, as well as the ability to reuse pre-trained computer vi-
sion models, which work on clear weather conditions. Un-
fortunately, the field is dataset bottlenecked. State-of-the-
art techniques use deep networks, but suffer from a common
issue: the same scene cannot be observed at the same time,
with and without weather artifacts. Therefore, it is not pos-
sible to train deep networks on ideal pairs, a pair of clean
and degraded images of the same scene at the same time.
Previous work has attempted to solve the dataset bottle-
neck by using simulated pairs. A simulated pair is formed
by starting with a clean image of a scene and artificially
adding weather degradations. For example, one could care-
fully simulate the effect of raindrop streaks on a clean im-
age. Unfortunately, simulating the diverse weather condi-
tions that one can encounter is a very difficult path. Exist-
ing simulators for rain are difficult to generalize, and scaling
simulators for rain, fog, and snow poses a further challenge.
Nonetheless, simulated pairs have been the most common
approach, and researchers have accepted the generalization
errors that are encountered. Another emerging way to ob-
tain pairs is to use pseudo-real pairs. A pseudo-real pair
is a pair of clean and degraded images that is formed with-
out the use of simulators. One way of doing so is to use
a video-based deraining method to remove rain (which is
dynamic) from a scene [60]. This form of ground-truthing
assumes video-based deraining is itself a solved problem,
which leads to limited performance, particularly in rain ef-
fects that are less dynamic (such as far-field veiling).
Perhaps the highest-quality pairs were obtained in the
most recent work known as GT-RAIN [2]. This paper took a
different tack, introducing time multiplexed pairs. A time
multiplexed pair is obtained by taking an input video se-
quence and grabbing closely spaced frames in the video,
with and without rain. This approach only works if nearby
frames are grabbed in a magic moment when the scene
conditions are just right, e.g., the rain is on the cusp of stop-
ping, illumination constancy is observed, limited dynamic
agents, and so on. In the less than one percent of videos
that have suitable conditions, a time-multiplexed pair per-
forms almost like real, ground truth.
Unfortunately, approaching time multiplexed pairing us-
ing human annotation (as has been done in previous work)
is hard to scale to 100K+ pairs. As a generous lower bound,
it would take 1 human labeler, 1 minute to carefully parse
through 5 video sequences. Scaling this up to 2 million
videos would take a whole year. Moreover, 99.6% of the
video sequences do not meet the criteria for a magic mo-
ment. A further limit to scalability is that human observers
must be highly trained to control for factors such as illumi-
nation shifts, weather API errors, and dynamic objects that
are prevalent in over 99% of the videos.
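As a back-of-the-envelope restatement of that estimate (our own arithmetic, using only the numbers quoted above):

\[
\frac{2\times 10^{6}\ \text{videos}}{5\ \text{videos/minute}} = 4\times 10^{5}\ \text{minutes} \approx 6{,}667\ \text{hours} \approx 278\ \text{days},
\]

i.e., roughly a year of uninterrupted labeling, before discarding the roughly 99.6% of sequences that fail the magic-moment criteria.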
In this paper, we formulate light transport techniques,
which model the flow of light in a scene [29] to help us
decide if frames should be included in training data. A key
contribution is to formulate four principles of light transport
to decide if a time-multiplexed pair is valid: (1) Background
Conformity; (2) Particle Chromatic Variation; (3) Scatter-
dependent Blur; and (4) Illumination Consistency.
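Purely as an illustration of how such principles could drive an automatic filter, the sketch below combines per-pair checks into a single accept/reject decision. Only principle (4) is sketched, as a crude mean-intensity comparison; the threshold, data layout, and all function names are our own assumptions, not the paper's criteria or code.

```python
import numpy as np

def illumination_consistent(clean, degraded, tol=0.05):
    """Toy stand-in for principle (4): reject the pair if the global illumination
    shifts by more than `tol` (as a fraction of mean intensity) between frames."""
    m_c, m_d = float(np.mean(clean)), float(np.mean(degraded))
    return abs(m_c - m_d) <= tol * max(m_c, 1e-6)

def is_valid_pair(clean, degraded, checks):
    """Accept a candidate time-multiplexed pair only if every check passes;
    in practice the overwhelming majority of candidate frames are rejected."""
    return all(check(clean, degraded) for check in checks)

# Example with synthetic frames and the single illustrative check above.
clean_frame = np.full((64, 64), 0.50)
rainy_frame = np.full((64, 64), 0.51)
print(is_valid_pair(clean_frame, rainy_frame, [illumination_consistent]))  # True
```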
Contributions: Our work is an initial attempt to use
light transport to formulate how time multiplexed pairs
should be selected, while the only previous approach is hu-
man annotated and limited to rain [2]. Automation scales
the dataset, enabling us to obtain a dataset of 188K im-
age pairs. This is the largest all-weather removal dataset to
date, and includes diverse rain and snow of different shapes,
sizes, and strengths, in various locations around the globe,
with a plethora of backgrounds, camera settings, and illumi-
nations. For this reason, we observe a 1.5 dB improvement
in performance across various state-of-the-art baselines. The
dataset will be released conditional on acceptance.
|
Zhu_Learning_Weather-General_and_Weather-Specific_Features_for_Image_Restoration_Under_Multiple_CVPR_2023 | Abstract
Image restoration under multiple adverse weather con-
ditions aims to remove weather-related artifacts by using a
single set of network parameters. In this paper, we find that
image degradations under different weather conditions con-
tain general characteristics as well as their specific charac-
teristics. Inspired by this observation, we design an efficient
unified framework with a two-stage training strategy to ex-
plore the weather-general and weather-specific features.
The first training stage aims to learn the weather-general
features by taking the images under various weather con-
ditions as inputs and outputting the coarsely restored re-
sults. The second training stage aims to learn to adaptively
expand the specific parameters for each weather type in
the deep model, where the requisite positions for expand-
ing weather-specific parameters are automatically learned.
Hence, we can obtain an efficient and unified model for im-
age restoration under multiple adverse weather conditions.
Moreover, we build the first real-world benchmark dataset
with multiple weather conditions to better deal with real-
world weather scenarios. Experimental results show that
our method achieves superior performance on all the syn-
thetic and real-world benchmarks. Codes and datasets are
available at this repository.
| 1. Introduction
Adverse weather conditions, such as rain, haze, and
snow, are common climatic phenomena in our daily life.
They often lead to the poor visual quality of captured im-
ages and primarily deteriorate the performance of many
outdoor vision systems, such as outdoor security cameras and automatic driving [53, 107].
⋆: This work was done during their internship at Shanghai Artificial Intelligence Laboratory.
†: Co-first authors contributed equally.
Corresponding authors.
Figure 1. Illustration of the proposed method and the currently existing solutions. (a) The weather-specific methods; (b) the method of [41]; (c) methods of [6, 69]; (d) our method, which learns the weather-specific and weather-general features in an efficient manner to remove multiple weather-related artifacts. (Legend: layers hold specific parameters for deraining, desnowing, or dehazing, plus general parameters for all weather; NULL marks a layer with no specific parameters; blocks denote convolution layers.)
To make these systems
more robust to various adverse weather conditions, many
restoration solutions have been proposed, such as deraining
[16,17,25,40,72,73,82,83], dehazing [1,23,49,65,80,84],
desnowing [4, 51, 97], and raindrop removal [22, 59, 96].
Although these approaches exhibit promising performance
in the given weather situation, they are only applicable
to certain typical weather scenarios. However, outdoor vision systems inevitably have to tackle various kinds of weather in practical applications. Consequently,
as shown in Fig. 1 (a), multiple sets of weather-specific
model parameters are required to deal with various condi-
tions, which brings additional computational and storage
burdens. Hence, it is vital to develop a uni-
fied model capable of addressing various types of adverse
weather conditions via a single set of network parameters.
Recently, several methods [6, 41, 69] adopt a single set
of network parameters to remove different weather-related
artifacts. However, these solutions contain limitations for
practical deployment and applications. Firstly, some meth-
ods [6,69] fail to consider the specific characteristics of each
weather condition in their proposed unified models, limit-
ing their restoration performance on specific weather con-
ditions. Secondly, as shown in Fig. 1 (b), although Li et
al. [41] tackle the differences and similarities of weather
degradation with multiple individual encoders and a shared
decoder, such multiple fixed encoders may largely in-
crease network parameters. Thirdly, existing unified mod-
els [6, 41, 69] often require a large number of parameters,
limiting the model efficiency. Lastly, current state-of-the-
art methods [6, 41, 69] mainly employ synthesized datasets
in their training phase, causing apparent performance drops
in real-world scenarios.
In this paper, we argue that images with different weather
distortions contain general characteristics as well as their
specific characteristics. According to the atmosphere scat-
tering model [55, 57], due to the attenuation and scattering
effects, these weather disturbances often share some sim-
ilar visual degradation appearances, e.g., low contrast and
color degradation. Meanwhile, the typical type of weather
distortion has its unique characteristics. For example, rainy
images often suffer from occlusion by rain streaks with dif-
ferent shapes and scales [32, 82]; haze exhibits global dis-
tortions on the entire images [34, 65]. Pioneer works also
have devised many specific priors [23,72,80,83,97] for dif-
ferent weather conditions, which motivates us to explore
weather-general andweather-specific features to perform
image restoration under multiple weather conditions.
To achieve this, we design an efficient unified frame-
work for multiple adverse weather-related artifacts removal
by exploring both weather-general and weather-specific fea-
tures. The training procedure of our framework consists
of two stages. The first training stage aims to learn the
general features by taking various images under different
weather conditions as the inputs and outputting coarse re-
sults for multiple weather conditions. In the second train-
ing stage, we devise a regularization-based optimization
scheme, which learns to adaptively expand the specific parameters for each weather type in the deep model. Note that
these requisite positions to expand weather-specific parame-
ters could be learned automatically, thus avoiding redundant
parameters pre-designed by researchers. Hence, we are able
to obtain an efficient and unified model for image restora-
tion under multiple adverse weather conditions. Further-
more, we newly construct the first real-world benchmark
dataset with multiple weather conditions to better deal with
various weather-related artifacts in real-world scenarios.
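The expansion idea can be caricatured as below; this is our own minimal sketch (module layout, expansion trigger, and names are assumptions), not the authors' framework.

```python
import torch
import torch.nn as nn

class WeatherLayer(nn.Module):
    """One layer with weather-general parameters plus optional weather-specific branches."""
    def __init__(self, channels):
        super().__init__()
        self.general = nn.Conv2d(channels, channels, 3, padding=1)  # shared by all weather types
        self.specific = nn.ModuleDict()                             # expanded on demand in stage two

    def expand(self, weather):
        """Stage 2: add a specific branch at this layer only if it is deemed requisite."""
        if weather not in self.specific:
            self.specific[weather] = nn.Conv2d(self.general.in_channels,
                                               self.general.out_channels, 3, padding=1)

    def forward(self, x, weather=None):
        out = self.general(x)                                       # weather-general features
        if weather is not None and weather in self.specific:
            out = out + self.specific[weather](x)                   # weather-specific correction
        return out

layer = WeatherLayer(16)
layer.expand("rain")                                                # position learned automatically in the real method
y = layer(torch.randn(1, 16, 32, 32), weather="rain")
```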
The contributions of this paper could be summarized as:
• We reveal that image degradations under different
weather conditions contain both general and specific
characteristics, which motivates us to design a uni-
fied deep model by exploring the weather-general and
weather-specific features for removing weather-related
artifacts under multiple weather conditions.
• We present a two-stage training strategy to learn the
weather-general and weather-specific features auto-
matically. Moreover, the weather-specific features are
adaptively added at the learned positions, which makes
our model efficient and effective.
• In order to better deal with real-world weather con-
ditions, we construct the first real-world benchmark
dataset with multiple weather conditions. Addition-
ally, experimental results validate the superiority of our
proposed method on various benchmarks.
|
Zheng_CAMS_CAnonicalized_Manipulation_Spaces_for_Category-Level_Functional_Hand-Object_Manipulation_Synthesis_CVPR_2023 | Abstract
In this work, we focus on a novel task of category-
level functional hand-object manipulation synthesis cover-
ing both rigid and articulated object categories. Given an
object geometry, an initial human hand pose as well as a
sparse control sequence of object poses, our goal is to gen-
erate a physically reasonable hand-object manipulation se-
quence that performs like human beings. To address such
a challenge, we first design CAnonicalized Manipulation
Spaces (CAMS), a two-level space hierarchy that canon-
icalizes the hand poses in an object-centric and contact-
centric view. Benefiting from the representation capability
of CAMS, we then present a two-stage framework for syn-
thesizing human-like manipulation animations. Our frame-
work achieves state-of-the-art performance for both rigid
and articulated categories with impressive visual effects.
Codes and video results can be found at our project home-
page: https://cams-hoi.github.io/.
*Equal contribution with the order determined by rolling dice.
†Corresponding author.
| 1. Introduction
Humans conduct hand-object manipulation (HOM) for
certain functional purposes commonly in daily life, e.g.
opening a laptop and using scissors to cut. Understand-
ing how such manipulation happens and being able to
synthesize realistic hand-object manipulation has naturally
become a key problem in computer vision. A genera-
tive model that can synthesize human-like functional hand-
object manipulation plays an essential role in various ap-
plications, including video games, virtual reality, dexterous
robotic manipulation, and human-robot interaction.
This problem has only been studied with a very limited
scope previously. Most existing works focus on the synthe-
sis of a static grasp either with [3] or without [17] a func-
tional goal. Recently, some works have started focus-
ing on dynamic manipulation synthesis [7, 41]. However,
these works restrict their scope to rigid objects and do not
consider the fact that functional manipulation might change
the object geometry as well, such as in opening a laptop by
hand. Moreover, these works usually require a strong input,
including hand and object trajectories or a grasp reference,
limiting their application scenarios.
To expand the scope of HOM synthesis, we propose a
new task of category-level functional hand-object manip-
ulation synthesis. Given a 3D shape from a known cate-
gory as well as a sequence of functional goals, our task is to
synthesize human-like and physically realistic hand-object
manipulation to sequentially realize the goals as shown in
Figure 1. Besides rigid objects, we also consider articulated
objects, which support richer manipulations than a simple
move. We represent a functional goal as a 6D pose for each
rigid part of the object. We emphasize category-level for
generalization to unseen geometry and for more human-like
manipulations revealing the underlying semantics.
In this work, we choose to tackle the above task with
a learning approach. We can learn from human demon-
strations for HOM synthesis thanks to the recent ef-
fort in capturing category-level human-object manipulation
dataset [25]. The key challenges lie in three aspects. First, a
synthesizer needs to generalize to a diverse set of geometry
with complex kinematic structures. Second, humans can
interact with an object in diverse ways. Faithfully captur-
ing such distribution and synthesizing in a similar manner
is difficult. Third, physically realistic synthesis requires un-
derstanding the complex dynamics between the hand and
the object. Such understanding makes sure that the syn-
thesized hand motion indeed drives the object state change
without violating basic physical rules.
To address the above challenges, we choose to gener-
ate object motion through motion planning and learn a neu-
ral synthesizer to generate dynamic hand motion accord-
ingly. Our key idea is to canonicalize the hand pose in an
object-centric and contact-centric view so that the neural
synthesizer only needs to capture a compact distribution.
This idea comes from the following key observations. Dur-
ing functional hand-object manipulation, human hands usu-
ally possess a strong preference for the contact regions, and
such preference is highly correlated to the object geometry,
e.g. hand grasping the display edge while opening a laptop.
From the contact point’s perspective, the finger pose also
lies in a low-dimensional space. Representing hand poses
from an object-centric view as a set of contact points and
from a contact-centric view as a set of local finger embed-
dings could greatly reduce the learning complexity.
Specifically, given an input object plus several functional
goals, we first interpolate per-part object poses between ev-
ery adjacent functional goal, resulting in an object motion
trajectory. Then we take a two-stage method to synthesize
the corresponding hand motion. In the first stage, we intro-
duce CAnonicalized Manipulation Spaces (CAMS) to plan
the hand motion. CAMS is defined as a two-level space
hierarchy. At the root level, all corresponding parts from
the category of interest are scale-normalized and consis-
tently oriented so that the distribution of possible contact points becomes concentrated. At the leaf level, each contact
point would define a local frame. This local frame would
simplify the distribution of the corresponding finger pose.
With CAMS, we could represent a hand pose as an object-
centric and contact-centric CAMS embedding. At the core
of our method is a conditional variational auto-encoder, which
learns to predict a CAMS embedding sequence given an ob-
ject motion trajectory. In the second stage, we introduce a
contact- and penetration-aware motion synthesizer to fur-
ther synthesize an object motion-compatible hand motion
given the CAMS embedding sequence.
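As a small, hedged illustration of the object motion step described above (interpolating per-part object poses between adjacent functional goals), translations can be interpolated linearly and orientations spherically; the sketch and its names are our own, not the authors' code.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def interpolate_part_poses(goal_a, goal_b, num_steps):
    """goal_a, goal_b: 6D poses of one rigid part as (quaternion xyzw, translation).
    Returns a list of interpolated (rotation, translation) pairs forming a trajectory."""
    (qa, ta), (qb, tb) = goal_a, goal_b
    slerp = Slerp([0.0, 1.0], Rotation.from_quat([qa, qb]))
    traj = []
    for s in np.linspace(0.0, 1.0, num_steps):
        rot = slerp([s])[0]                                        # spherical interpolation of orientation
        trans = (1.0 - s) * np.asarray(ta) + s * np.asarray(tb)    # linear interpolation of position
        traj.append((rot, trans))
    return traj

# Example: rotate a laptop-lid part by 90 degrees about x while keeping it in place.
goal_a = ([0, 0, 0, 1], [0.0, 0.0, 0.0])
goal_b = (Rotation.from_euler("x", 90, degrees=True).as_quat(), [0.0, 0.0, 0.0])
trajectory = interpolate_part_poses(goal_a, goal_b, num_steps=5)
```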
To summarize, our main contributions include: i) A new
task of functional category-level hand-object manipulation
synthesis. ii) CAMS, a hierarchy of spaces canonicalizing
category-level HOM enabling manipulation synthesis for
unseen objects. iii) A two-stage motion synthesis method
to synthesize human-like and physically realistic HOM. iv)
State-of-the-art HOM synthesis results for both articulated
and rigid object categories.
|
Zhou_UniDistill_A_Universal_Cross-Modality_Knowledge_Distillation_Framework_for_3D_Object_CVPR_2023 | Abstract
In the field of 3D object detection for autonomous
driving, the sensor portfolio including multi-modality and
single-modality is diverse and complex. Since the multi-
modal methods have system complexity while the accuracy
of single-modal ones is relatively low, how to make a trade-
off between them is difficult. In this work, we propose a
universal cross-modality knowledge distillation framework
(UniDistill) to improve the performance of single-modality
detectors. Specifically, during training, UniDistill projects
the features of both the teacher and the student detector into
Bird’s-Eye-View (BEV), which is a friendly representation
for different modalities. Then, three distillation losses are
calculated to sparsely align the foreground features, help-
ing the student learn from the teacher without introducing
additional cost during inference. Taking advantage of the
similar detection paradigm of different detectors in BEV ,
UniDistill easily supports LiDAR-to-camera, camera-to-
LiDAR, fusion-to-LiDAR and fusion-to-camera distillation
paths. Furthermore, the three distillation losses can filter
the effect of misaligned background information and bal-
ance between objects of different sizes, improving the dis-
tillation effectiveness. Extensive experiments on nuScenes
demonstrate that UniDistill effectively improves the mAP
and NDS of student detectors by 2.0% ∼3.2%.
| 1. Introduction
3D object detection plays a critical role in autonomous
driving and robotic navigation. Generally, the popular
3D detectors can be categorized into (1) single-modality
detectors that are based on LiDAR [18, 33, 34, 42, 43]
or camera [1, 13, 20, 24] and (2) multi-modality detec-
tors [22, 30, 36, 37] that are based on both modalities. By
fusing the complementary knowledge of two modalities, multi-modality detectors outperform their single-modality counterparts.
*Equal Contribution
†Corresponding Author
Figure 1. Illustration of our proposed UniDistill. The characters in green and blue represent the data process of camera and LiDAR, respectively. (a) and (b) show the procedure of two previous knowledge distillation methods, where the modalities of the teacher and the student are restricted. By contrast, our proposed UniDistill in (c) supports four distillation paths. (Legend: B = backbone, D = detector, T = data transform; arrows denote data flow and knowledge flow.)
Nevertheless, simultaneously processing the
data of two modalities unavoidably introduces extra net-
work designs and computational overhead. Worse still, the
breakdown of any modality directly fails the detection, hin-
dering the application of these detectors.
As a solution, some recent works introduced knowledge
distillation to transfer complementary knowledge of other
modalities to a single-modality detector. In [6,15,46], as il-
lustrated in Figure 1(a) and 1(b), for a single-modality stu-
dent detector, the authors first performed data transforma-
tion of different modalities to train a structurally identical
teacher. The teacher was then leveraged to transfer knowl-
edge by instructing the student to produce similar features
and prediction results. In this way, the single-modality stu-
dent obtains multi-modality knowledge and improves per-
formance, without additional cost during inference.
Despite their effectiveness to transfer cross-modality
knowledge, the application of existing methods is limited
since the modalities of both the teacher and the student are
restricted. In [6], the modalities of the teacher and student
are fixed to be LiDAR and camera while in [15,46], they are
determined to be LiDAR-camera and LiDAR. However, the
sensor portfolio in the field of 3D object detection results
in a diverse and complex application of different detectors.
With restricted modalities of both the teacher and student,
these methods are difficult to be applied in more situations,
e.g., the method in [6] is not suitable to transfer knowledge
from a camera based teacher to a LiDAR based student.
To solve the above problems, we propose a universal
cross-modality knowledge distillation framework (UniDis-
till) that helps single-modality detectors improve perfor-
mance. Our motivation is based on the observation that the
detectors of different modalities adopt a similar detection
paradigm in bird’s-eye view (BEV), where after transform-
ing the low-level features to BEV , a BEV encoder follows
to further encode high-level features and a detection head
produces response features to perform final prediction.
UniDistill takes advantage of the similarity to construct
the universal knowledge distillation framework. As in Fig-
ure 1(c), during training, UniDistill projects the features of
both the teacher and the student detector into the unified
BEV domain. Then for each ground truth bounding box,
three distillation losses are calculated to transfer knowl-
edge: (1) A feature distillation loss that transfers the seman-
tic knowledge by aligning the low-level features of 9 cru-
cial points. (2) A relation distillation loss that transfers the
structural knowledge by aligning the relationship between
the high-level features of 9 crucial points. (3) A response
distillation loss that closes the prediction gap by aligning
the response features in a Gaussian-like mask. Since the
aligned features are commonly produced by different detec-
tors, UniDistill easily supports LiDAR-to-camera, camera-
to-LiDAR, fusion-to-LiDAR and fusion-to-camera distilla-
tion paths. Furthermore, the three losses sparsely align the
foreground features to filter the effect of misaligned back-
ground information and balance between objects of differ-
ent scales, improving the distillation effectiveness.
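The three losses can be sketched for a single ground-truth box as follows; the formulations below (L1 alignment, cosine relations, masked response error) are simplified assumptions for illustration, not the official UniDistill code.

```python
import torch
import torch.nn.functional as F

def unidistill_losses(stu_low, tea_low, stu_high, tea_high, stu_resp, tea_resp, mask):
    """stu_/tea_low  : (9, C)  low-level BEV features at the 9 crucial points of one box
       stu_/tea_high : (9, C)  high-level BEV features at the same points
       stu_/tea_resp : (H, W)  response (heatmap) features
       mask          : (H, W)  Gaussian-like mask centered on the box"""
    # (1) feature distillation: align low-level features at the crucial points
    l_feat = F.l1_loss(stu_low, tea_low)

    # (2) relation distillation: align pairwise cosine similarities among the 9 points
    rel_s = F.normalize(stu_high, dim=1) @ F.normalize(stu_high, dim=1).t()
    rel_t = F.normalize(tea_high, dim=1) @ F.normalize(tea_high, dim=1).t()
    l_rel = F.l1_loss(rel_s, rel_t)

    # (3) response distillation: align responses inside the Gaussian-like mask
    l_resp = (mask * (stu_resp - tea_resp).abs()).sum() / mask.sum().clamp(min=1e-6)

    return l_feat + l_rel + l_resp
```

Because only foreground points and masked regions enter the losses, misaligned background features contribute nothing, which is the filtering effect described above.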
In summary, our contributions are three-fold:
• We propose a universal cross-modality knowledge dis-
tillation framework (UniDistill) in the friendly BEV
domain for single-modality 3D object detectors. With
the transferred knowledge of different modalities, the
performance of single-modality detectors is improved
without additional cost during inference.
• Benefiting from the similar detection paradigm in
BEV , UniDistill supports LiDAR-to-camera, camera-
to-LiDAR, fusion-to-LiDAR and fusion-to-camera
distillation paths. Moreover, three distillation losses
are designed to sparsely align foreground features, fil-
tering the effect of background information misalign-
ment and balance between objects of different sizes.
• Extensive experiments on nuScenes demonstrate that
UniDistill can effectively improve the mAP and NDS
of student detectors by 2.0% ∼3.2%.
|
Zohar_PROB_Probabilistic_Objectness_for_Open_World_Object_Detection_CVPR_2023 | Abstract
Open World Object Detection (OWOD) is a new and
challenging computer vision task that bridges the gap be-
tween classic object detection (OD) benchmarks and ob-
ject detection in the real world. In addition to detecting
and classifying seen/labeled objects, OWOD algorithms are
expected to detect novel/unknown objects - which can be
classified and incrementally learned. In standard OD, ob-
ject proposals not overlapping with a labeled object are
automatically classified as background. Therefore, simply
applying OD methods to OWOD fails as unknown objects
would be predicted as background. The challenge of detect-
ing unknown objects stems from the lack of supervision in
distinguishing unknown objects and background object pro-
posals. Previous OWOD methods have attempted to over-
come this issue by generating supervision using pseudo-
labeling - however, unknown object detection has remained
low. Probabilistic/generative models may provide a solu-
tion for this challenge. Herein, we introduce a novel prob-
abilistic framework for objectness estimation, where we al-
ternate between probability distribution estimation and ob-
jectness likelihood maximization of known objects in the
embedded feature space - ultimately allowing us to estimate
the objectness probability of different proposals. The result-
ingProbabilistic Objectness transformer-based open-world
detector, PROB, integrates our framework into traditional
object detection models, adapting them for the open-world
setting. Comprehensive experiments on OWOD benchmarks
show that PROB outperforms all existing OWOD methods
in both unknown object detection (∼2× unknown recall)
and known object detection (∼10% mAP). Our code is
available at https://github.com/orrzohar/PROB.
| 1. Introduction
Object detection (OD) is a fundamental computer vi-
sion task that has a myriad of real-world applications, from
autonomous driving [18, 25], robotics [4, 32] to health-
care [6, 12]. However, like many other machine learning
systems, generalization beyond the training distribution re-
Figure 1. Comparison of PROB with other open world object de-
tectors. (a) Query embeddings are extracted from an image via
the deformable DETR model. (b) other open-world detectors at-
tempt to directly distinguish between unlabeled ‘hidden’ objects
and background without supervision (red). (c) PROB’s scheme of
probabilistic objectness training and revised inference, which per-
forms alternating optimization of (i) Embeddings distribution es-
timation and (ii) likelihood maximization of embeddings that rep-
resent known objects. (d) Qualitative examples of the improved
unknown object detection of PROB on the MS-COCO test set.
mains challenging [5] and limits the applicability of exist-
ing OD systems. To facilitate the development of machine
learning methods that maintain their robustness in the real
world, a new paradigm of learning was developed – Open
World Learning (OWL) [8–10, 16, 17, 21, 27, 29–31, 34]. In
OWL, a machine learning system is tasked with reason-
ing about both known and unknown concepts, while slowly
learning over time from a non-stationary data stream. In
Open World Object Detection (OWOD), a model is ex-
pected to detect all previously learned objects while simul-
taneously being capable of detecting novel unknown ob-
jects. These flagged unknown objects can be sent to an
oracle (human annotator), which labels the objects of in-
terest. The model is then expected to update itself without
catastrophically forgetting previous object classes [10].
While unknown object detection is pivotal to the OWOD
objective, existing OWOD methods have very low unknown
object recall (∼10%) [8, 10, 30, 34]. As such, it is clear that
the field has much to improve to meet its actual goal. The
difficulty of unknown object detection stems from a lack of
supervision as, unlike known objects, unknown objects are
not labeled. Hence, while training OD models, object pro-
posals that include an unknown object would be incorrectly
penalized as background. Thus far, most OWOD methods
have attempted to overcome this challenge by using differ-
ent heuristics to differentiate between unknown objects and
background during training. For example, OW-DETR [8]
uses a pseudo-labeling scheme where image patches with
high backbone feature activation are determined to be un-
known objects, and these pseudo-labels are used to super-
vise the OD model. In contrast, instead of reasoning about
known and unknown objects separately using labels and
pseudo-labels, we take a more direct approach. We aim
to learn a probabilistic model for general “objectness” (see
Fig. 1). Any object – both known and unknown – should
have general features that distinguish them from the back-
ground, and the learned objectness can help improve both
unknown and known object detection.
Herein, we introduce the Probabilistic Objectness Open
World Detection Transformer, PROB. PROB incorporates
a novel probabilistic objectness head into the standard de-
formable DETR (D-DETR) model. During training, we al-
ternate between estimating the objectness probability dis-
tribution and maximizing the likelihood of known objects.
Unlike a classification head, this approach does not re-
quire negative examples and therefore does not suffer from
the confusion of background and unknown objects. Dur-
ing inference, we use the estimated objectness distribution
to estimate the likelihood that each object proposal is in-
deed an object (see Fig. 1). The resulting model is simple
and achieves state-of-the-art open-world performance. We
summarize our contributions as follows:
• We introduce PROB - a novel OWOD method. PROB
incorporates a probabilistic objectness prediction head
that is jointly optimized as a density model of the im-
age features along with the rest of the transformer net-
work. We utilize the objectness head to improve both
critical components of OWOD: unknown object detec-
tion and incremental learning.
• We show extensive experiments on all OWOD bench-
marks demonstrating the PROB’s capabilities, which
outperform all existing OWOD models. On MS-
COCO, PROB achieves relative gains of 100-300%
in terms of unknown recall over all existing OWOD
methods while improving known object detection per-
formance by ∼10% across all tasks.
• We show separate experiments for incremental learn-
ing tasks where PROB outperformed both OWOD
baselines and baseline incremental learning methods.
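As a caricature of the alternating optimization described above (our own hedged sketch with assumed shapes, update rules, and names, not the released PROB code): a Gaussian over query embeddings is re-estimated periodically, and its Mahalanobis-based likelihood serves as the objectness score that is maximized for embeddings matched to known objects.

```python
import torch

class ProbabilisticObjectness:
    """Toy objectness head: a single Gaussian over query embeddings."""
    def __init__(self, dim, momentum=0.1):
        self.mu = torch.zeros(dim)
        self.cov = torch.eye(dim)
        self.cov_inv = torch.eye(dim)
        self.momentum = momentum

    @torch.no_grad()
    def update_distribution(self, embeddings):
        # (i) embedding distribution estimation via exponential moving averages
        mu_b = embeddings.mean(dim=0)
        cov_b = torch.cov(embeddings.t()) + 1e-3 * torch.eye(embeddings.shape[1])
        self.mu = (1 - self.momentum) * self.mu + self.momentum * mu_b
        self.cov = (1 - self.momentum) * self.cov + self.momentum * cov_b
        self.cov_inv = torch.linalg.inv(self.cov)

    def objectness(self, embeddings):
        # Negative squared Mahalanobis distance: closer to the mean = more object-like.
        d = embeddings - self.mu
        return -(d @ self.cov_inv * d).sum(dim=-1)

    def likelihood_loss(self, known_embeddings):
        # (ii) maximize objectness likelihood of embeddings matched to known objects
        return -self.objectness(known_embeddings).mean()
```

Note that no negative (background) examples are needed anywhere in this scheme, which is the property the paper exploits.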
|
Zhu_NerVE_Neural_Volumetric_Edges_for_Parametric_Curve_Extraction_From_Point_CVPR_2023 | Abstract
Extracting parametric edge curves from point clouds is
a fundamental problem in 3D vision and geometry process-
ing. Existing approaches mainly rely on keypoint detection,
a challenging procedure that tends to generate noisy out-
put, making the subsequent edge extraction error-prone. To
address this issue, we propose to directly detect structured
edges to circumvent the limitations of the previous point-wise
methods. We achieve this goal by presenting NerVE, a novel
neural volumetric edge representation that can be easily
learned through a volumetric learning framework. NerVE
can be seamlessly converted to a versatile piece-wise lin-
ear (PWL) curve representation, enabling a unified strategy
for learning all types of free-form curves. Furthermore, as
NerVE encodes rich structural information, we show that
edge extraction based on NerVE can be reduced to a simple
graph search problem. After converting NerVE to the PWL
representation, parametric curves can be obtained via off-
the-shelf spline fitting algorithms. We evaluate our method
on the challenging ABC dataset [19]. We show that a sim-
ple network based on NerVE can already outperform the
previous state-of-the-art methods by a great margin.
| 1. Introduction
The advances of 3D scanning techniques have enabled us
to digitize and reconstruct the physical world, benefiting a
wide range of applications including 3D modeling, industrial
design, robotic vision, etc. However, point clouds, the raw
output of a 3D scanner, are typically noisy, unstructured, and
can exhibit strong sampling bias. Hence, extracting struc-
tured features, such as the feature edges, from an unordered
point cloud is a vital geometry processing task. Sharp ge-
ometric edges can be used as an abstraction of a complex
3D shape, facilitating downstream tasks including surface
reconstruction, normal estimation, and shape classification.
*Xiangyu Zhu and Dong Du contribute equally.
†Corresponding author's email: [email protected]
Figure 1. We present NerVE, a neural volumetric edge representation for parametric curve extraction from point clouds. Our method can predict structured NerVE instead of unstructured edge points in previous methods, and directly convert NerVE into piece-wise linear (PWL) curves, reducing the error-prone post-processing of previous methods into simple graph search on PWL curves. Post-processing includes (a) endpoint detection; (b) graph structure analysis; (c) points thinning; (d) special treatment for closed curves.
Previous state-of-the-art methods mainly resort to a keypoint
fitting strategy to extract parametric edge curves from a point
cloud. Specifically, they first detect a sparse set of keypoints,
such as the endpoints or points on sharp edges, and then
group these points into individual sets according to prede-
fined topologies. Finally, each point set is converted into a
parametric curve using spline fitting.
Recent approaches have striven to improve the accuracy of
edge point detection by using hand-crafted features [26] or
deep neural networks [2, 25, 36, 41]. Despite the impressive
progress that has been made, existing works still have the
following limitations. 1) The widely adopted point-wise clas-
sification approaches tend to generate noisy estimations – the
predicted edge points typically contain a spurious set of can-
didate points (see Fig. 5), which requires further processing
for keypoint cleaning and increases the risk of false/missing
connections. 2) The grouping procedure highly relies on the
accuracy of endpoint detection. However, it remains difficult
to accurately locate endpoints, especially when the normals
of the surrounding points change smoothly. 3) They require
tedious treatments to cope with different curve topologies,
including curve type estimation, topology-dependent artifact
points removal and curve connection, etc.
Our key observation is that the above issues can be re-
solved if we can directly predict structured edges in the
form of piece-wise linear (PWL) curves from the input point
cloud. This bypasses the problematic keypoint detection and
avoids the error-prone edge extraction in the curve fitting
stage. In addition, PWL curve is a general representation of
free-form curves, removing the need of curve topology esti-
mation and the laborious curve fitting and post-processing
dependent on curve category. Furthermore, PWL curves can
be easily converted to parametric curves using the off-the-
shelf solutions. However, unlike its parametric counterpart,
PWL curves are notoriously difficult to predict due to its
large degrees of freedom.
Towards this end, we propose NerVE , a novel neural edge
representation in a volumetric fashion. As shown in Fig. 3,
NerVE represents 3D structured edges using a regular grid
of volumetric cubes – each cube encodes rich structural
information including 1) one binary indicator of edge occu-
pancy, 2) edge orientations (if any), and 3) one edge point
position (if any). Thereby, NerVE can be readily converted
to PWL curves by connecting the edge points enclosed by
NerVE cubes according to the encoded point connectivity.
The introduction of NerVE brings several advantages. First,
the generated NerVE cubes are structured by itself, which
greatly simplifies the process of curve extraction. Second, it
is fully compatible with the PWL curve representation, and
hence, can deal with all types of curves in a unified manner.
Third, NerVE cubes can be viewed as a coarse representation
of the point cloud. Predicting the occupancy of a volumetric
cube is easier and more robust than point-wise classification.
Therefore, we are less likely to suffer from the issue of miss-
ing curves (see our results in Fig. 6). Lastly, inferring NerVE
can be formulated as a voxel-wise classification and regres-
sion problem, where the well-developed 3D convolutional
networks can be directly employed.
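To make the cube encoding concrete, a NerVE grid could be decoded into a piece-wise linear edge graph roughly as follows; the 6-neighbor connectivity flags and all names here are our own assumed conventions, not the paper's exact encoding.

```python
import numpy as np

def nerve_to_pwl(occupancy, orientations, points):
    """occupancy    : (R, R, R) bool, whether a cube contains an edge
       orientations : (R, R, R, 6) bool, connection flags to the 6 face neighbors (assumed encoding)
       points       : (R, R, R, 3) float, one edge point per occupied cube
       Returns PWL curves as a vertex array and an edge list (a simple graph)."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    idx = {tuple(c): i for i, c in enumerate(np.argwhere(occupancy))}
    vertices = [points[c] for c in idx]               # one vertex per occupied cube
    edges = set()
    for cube, i in idx.items():
        for k, off in enumerate(offsets):
            nb = tuple(np.add(cube, off))
            if orientations[cube][k] and nb in idx:   # connect to the flagged, occupied neighbor
                edges.add(tuple(sorted((i, idx[nb]))))
    return np.array(vertices), sorted(edges)
```

Curve extraction then reduces to graph search on this vertex/edge structure, as described above, before spline fitting.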
We further propose a volumetric learning framework to
predict NerVE from the input point cloud. We first encode
the point features into a volumetric feature grid with the
same resolution of the output. Then, a multi-head decoder
is used to predict the attributes of a NerVE cube from its
corresponding feature cell. After converting the NerVE
cubes into PWL curves, a specially-tailored post-processing
procedure is proposed to correct potential topology errors in
the resulting curves. Finally, the parametric curves can be
obtained via a straightforward spline fitting algorithm.
We evaluate our method on the ABC dataset [19], a large-
scale collection of computer-aided design (CAD) models
with challenging topology variations. In particular, we compare with the state-of-the-art approaches on two different
tasks: edge estimation and parametric curve extraction. Ex-
perimental results show that by leveraging the proposed
NerVE representation, our method can faithfully extract
complete and accurate edges and parametric curves from in-
tricate CAD models, outperforming the other methods both
qualitatively and quantitatively.
We summarize our contributions as follows:
• We propose NerVE, a learnable neural volumetric edge
representation that supports direct estimation of struc-
tured 3D edges, seamless conversion with general PWL
curves, and compatibility with latest volumetric learn-
ing framework.
• A pipeline for parametric curve extraction from point
cloud that consists of a learning-based framework for
faithful NerVE cubes estimation and a post-processing
module for curve topology correction.
• We set a new state-of-the-art on the ABC dataset in the
task of parametric curve extraction from point cloud.
|
Zhao_Rethinking_Gradient_Projection_Continual_Learning_Stability__Plasticity_Feature_Space_CVPR_2023 | Abstract
Continual learning aims to incrementally learn novel
classes over time, while not forgetting the learned knowl-
edge. Recent studies have found that learning would not
forget if the updated gradient is orthogonal to the feature
space. However, previous approaches require the gradient
to be fully orthogonal to the whole feature space, leading to
poor plasticity, as the feasible gradient direction becomes
narrow when the tasks continually come, i.e., feature space
is unlimitedly expanded. In this paper, we propose a space
decoupling (SD) algorithm to decouple the feature space
into a pair of complementary subspaces, i.e., the stability
space I, and the plasticity space R. I is established by
conducting space intersection between the historic and cur-
rent feature space, and thus I contains more task-shared
bases. R is constructed by seeking the orthogonal comple-
mentary subspace of I, and thus R mainly contains task-
specific bases. By putting distinguishing constraints on R
and I, our method achieves a better balance between sta-
bility and plasticity. Extensive experiments are conducted
by applying SD to gradient projection baselines, and show
SD is model-agnostic and achieves SOTA results on publicly
available datasets.
| 1. Introduction
Deep neural networks (DNNs) have achieved promis-
ing performance on various vision tasks, including im-
age classification, object detection, and action recognition
[3, 9, 32, 34, 36]. However, DNNs are typically trained of-
fline on a fixed dataset, and therefore the models are not
able to incrementally learn novel concepts (novel classes),
which has become an emerging need in many real-world
applications [11, 16, 21, 26, 30].
†Corresponding authors.
Figure 1. Left: Recent gradient projection methods. All of them constrain the gradient to be fully orthogonal to the feature space. Right: We propose a space decoupling (SD) algorithm to decouple the feature space into a pair of complementary subspaces, i.e., the stability space I and the plasticity space R. To balance stability and plasticity, more bases are preserved in I and less in R, while stricter gradient constraints are put on I and looser ones on R.
In this context, continual learning (CL) [14] is proposed,
aiming to continually learn novel concepts, i.e., a series
of learning tasks, while not forgetting the learned knowl-
edge [1, 4, 6, 10, 15, 39]. Recent studies have found that
learning would have less impact on old tasks if the direc-
tion of the gradient is orthogonal to the space spanned by
the features from old tasks [18, 23, 31, 38]. With this mo-
tivation, a couple of continual learning methods referring
to feature space methods have been proposed and can be
generally divided into two classes: (a) Orthogonal based
methods; (b) Null-space based methods.
Orthogonal based methods like GPM [31] and TRGP
[23] calibrate the gradient in the direction fully orthogo-
nal to the feature space, while Null-space based methods
like Adam-NSCL [38], AdNS [18] train the model in the
null space of input features. It is easy to prove that these
two classes of approaches are equivalent and hold a unified
training paradigm: 1) construct a matrix using the features
from old tasks, e.g., concatenate; 2) utilize this matrix to
approximate a feature space; 3) project the gradient of the
new task to the orthogonal direction of the feature space.
However, we find all the mentioned approaches strictly
require the gradient to be fully orthogonal to the whole fea-
ture space, shown in the left of Figure 1. As the number
of training tasks increases, feature space is unlimitedly ex-
panded which will heavily limit the model updating and
lead to poor plasticity. Therefore, feature space methods
are facing a dilemma in balancing stability and plasticity
[27–29, 33, 40], despite their varied attempts in this issue.
Motivated by this insight, we propose a space decoupling
(SD) algorithm, shown in the right of Figure 1. We decou-
ple the whole feature space into a pair of orthogonal com-
plementary subspaces, i.e., the stability-correlated space I,
and the plasticity-correlated space R. In our implementa-
tions, I is established by conducting space intersection be-
tween the historic feature space and current feature space,
and thus I contains more bases shared by old tasks. R is
constructed by seeking the orthogonal complementary sub-
space of I, and thus R mainly contains task-specific bases.
As we can see, the update on I would significantly incur
forgetting, and the update on R would have less impact on
old tasks. Our empirical study also supports this claim by
finding that gradient updates within subspace I do more in-
terference on old tasks than R (please refer to Section 3.2).
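A linear-algebra sketch of the decoupling is given below (our illustrative version under assumed conventions; the paper's exact intersection construction and constraint strengths may differ): principal vectors of the two feature subspaces with near-unit singular values approximate the shared space I, and R is its orthogonal complement within the historic space.

```python
import numpy as np

def decouple_space(U_old, U_cur, tol=1e-3):
    """U_old, U_cur: orthonormal bases (columns) of the historic and current feature spaces.
    Returns (I, R): bases of the stability (shared) and plasticity (remaining) subspaces."""
    # Principal vectors with singular value ~1 span (approximately) the intersection.
    W, s, _ = np.linalg.svd(U_old.T @ U_cur, full_matrices=False)
    I = U_old @ W[:, s > 1.0 - tol]                     # stability-correlated space
    # Orthogonal complement of I inside the historic feature space.
    proj = U_old - I @ (I.T @ U_old) if I.size else U_old
    R, sr, _ = np.linalg.svd(proj, full_matrices=False)
    R = R[:, sr > tol]                                  # plasticity-correlated space
    return I, R

def project_gradient(g, I, R, alpha=1.0, beta=0.3):
    """Remove a fraction alpha of the gradient in I (strict) and beta of it in R (loose)."""
    g = g - alpha * I @ (I.T @ g) if I.size else g
    g = g - beta * R @ (R.T @ g) if R.size else g
    return g
```

Setting alpha close to 1 and beta small reproduces the intended behavior: updates in the stability space are almost fully suppressed, while the plasticity space stays largely free.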
Finally, in the stability-correlated space I, where a slight
change would bring about tremendous forgetting, we pay
more attention to stability by putting more strict constraints
on it. In the plasticity-correlated space R, which will have
less impact on old knowledge, we stress plasticity and allow
the model to be updated in a looser way here. Finally, with
SD, the performance of several state-of-the-art gradient pro-
jection methods is improved by a large margin. Below, we
summarize our contributions:
(1) We generalize recent gradient projection methods
[18,23,31,38] into a unified paradigm, under which we give
a new viewpoint about their stability-plasticity dilemma.
(2) We propose a novel Space Decoupling (SD) al-
gorithm to split the whole feature space into stability-
correlated space and plasticity-correlated space. By putting
distinguishing constraints on these subspaces, our method
achieves a better balance between stability and plasticity.
(3) We apply SD to various gradient projection baselines
and show our approach is model-agnostic and effective.
Extensive experiments on benchmark datasets demonstrate
state-of-the-art performance achieved by our approach.
|
Zhu_Curricular_Object_Manipulation_in_LiDAR-Based_Object_Detection_CVPR_2023 | Abstract
This paper explores the potential of curriculum learn-
ing in LiDAR-based 3D object detection by proposing a
curricular object manipulation (COM) framework. The
framework embeds the curricular training strategy into both
the loss design and the augmentation process. For the
loss design, we propose the COMLoss to dynamically pre-
dict object-level difficulties and emphasize objects of dif-
ferent difficulties based on training stages. On top of the
widely-used augmentation technique called GT-Aug in Li-
DAR detection tasks, we propose a novel COMAug strategy
which first clusters objects in ground-truth database based
on well-designed heuristics. Group-level difficulties rather
than individual ones are then predicted and updated during
training for stable results. Model performance and general-
ization capabilities can be improved by sampling and aug-
menting progressively more difficult objects into the train-
ing samples. Extensive experiments and ablation studies re-
veal the superiority and generality of the proposed framework.
The code is available at https://github.com/ZZY816/COM.
| 1. Introduction
LiDAR sensors can provide accurate, high-definition 3D
measurements of the surrounding environment. Such 3D in-
formation plays a noninterchangeable role in safety-critical
applications like 3D object detection in self-driving. How-
ever, the rich 3D information from LiDAR sensors does not
come without problems. Usually presented in the form of
a point cloud, LiDAR data suffers from (i) non-uniformity:
the point density decreases monotonically as the laser range
increases; (ii) orderless: the geometry of a point cloud re-
mains unchanged even if all of its points are randomly shuffled;
and (iii) sparsity: when quantized into voxel grids, a significant
portion of the voxels are empty.
To build a robust and performant LiDAR object detector,
*The first two authors have equal contribution to this work.
†Jian Yang is the corresponding author ([email protected]).
(a) Early stage.
(b) Later stage.
Figure 1. The proposed Curricular Object Manipulation (COM)
works in an easy-to-hard manner. In early stages, COMAug con-
strains the augmented objects (highlighted in red) to be easy ones
and COMLoss down-weights losses from difficult objects (marked
in boxes with thin lines). Objects with varying degrees of diffi-
culty are inserted into the point clouds in later stages. On the other
hand, hard objects will contribute more to loss values as training
progresses. Best viewed in color.
different data representations have been explored to allevi-
ate the non-uniformity and orderless challenges. Feature
extraction from the raw orderless point cloud can be made
possible by performing radius search or nearest neighbor
search in the 3D Euclidean space [6, 32, 41, 56]. Another
popular solution is to quantize the input point cloud into a
fixed grid of voxels [61] or pillars of voxels [21]. At the
price of quantization error, later processing can be done ef-
ficiently on the regular voxel pillars or grids [52].
But these different data representations do not change
the sparsity of the LiDAR point cloud data. Compared with
image object detection tasks, sparse point clouds contain
much less input stimuli and positive samples for neural net-
work training, as depicted in Figure 1. Thus, effective data
augmentation strategies are critical for faster model con-
vergence and better detection performance [13, 18, 31, 50,
52]. Among them, GT-Aug [52] (see Figure 2) is widely
adopted. GT-Aug first aggregates ground truth labels from
the training dataset into a database. During training, ran-
domly selected samples from the database are inserted into
the point cloud to amplify the supervision signal.
Figure 2. 3D object detection mAP of the car category with hard difficulty on the KITTI dataset from 2018 to 2021, for AVOD [20], PointRCNN [39], F-ConvNet [33], STD [57], PV-RCNN [37], Voxel R-CNN [8], and PV-RCNN++ [37], with and without GT-Aug. It is obvious from the figure that the GT-Aug strategy boosts the KITTI 3D object detection benchmark by a large margin since its inception [52]. GT-Aug has since become the de facto augmentation practice in popular open source toolkits [7, 48].
Notice that GT-Aug treats all samples in the database
equally, and all epochs of the training process equally. It
has come to our attention that selecting too many hard
examples at early stages may overwhelm the training, while
selecting too many easy samples at the later stages may
slow the model convergence. Similar conclusions were also
reached independently in the facial recognition field [16].
This important finding raises two questions for the widely
used GT-Aug strategy: (i) at a given training stage, how to
select samples that benefit the current stage the most, (ii) at
different training stages, how to adjust the sampling strate-
gies accordingly. However, solving these two questions is
not yet enough as the original objects in the training sam-
ple can also be ill-suited for the current training stage. Therefore, we
raise one additional question: (iii) how to properly handle
both augmented and original objects so that they contribute
to the model performance.
This work answers the above questions by leveraging
curriculum learning. Curriculum learning draws inspiration
from the human cognitive process, which begins with eas-
ier concepts and gradually moves on to more complicated
ones [1, 44]. Enlightened by such easy-to-hard paradigm,
we propose a curricular object manipulation (COM) frame-
work for the LiDAR object detection task. Our framework
consists of (i) COMLoss to manipulate the contributions
from objects of different difficulties, and (ii) COMAug to
manipulate the sampling process in GT-Aug.
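As a rough illustration of the loss side of this design, the sketch below re-weights per-object losses so that hard objects are down-weighted early in training and gradually regain influence; the weighting function and the temperature are illustrative assumptions rather than the exact COMLoss formulation.

```python
import torch

def com_weighted_loss(per_object_cls_loss, progress, temperature=1.0):
    """per_object_cls_loss: (N,) classification losses used as difficulty
    proxies; progress: scalar in [0, 1], fraction of training completed."""
    difficulty = per_object_cls_loss.detach()
    # Early in training (progress ~ 0) the weights decay quickly with
    # difficulty; near the end (progress ~ 1) all objects count almost equally.
    weights = torch.exp(-(1.0 - progress) * difficulty / temperature)
    return (weights * per_object_cls_loss).sum() / weights.sum().clamp(min=1e-6)
```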
In the COM framework, we employ the classification
loss as a simple yet effective proxy for object difficulties.
The COMLoss suppresses loss contributions from hard objects
in earlier stages and gradually relaxes this suppression,
as depicted in Fig. 1. Unfortunately, using the classification
score as the difficulty proxy creates an inherent paradox
in COMAug. Specifically, COMAug relies on up-to-date
scores of all objects to perform difficulty-adaptive
augmentation; yet keeping these scores current would require
every object to be sampled for augmentation recently, which is
impossible because only a limited number of objects are augmented
in each training frame. We design a clustering-based
method to address such paradox: objects with similar diffi-culties are grouped together, and the difficulty estimates are
updated for the groups rather than for the individual objects.
During training, hard groups will be sampled with mono-
tonically increasing probabilities as epoch increases, while
objects within each group will be sampled uniformly. In our
work, objects are grouped by their geometry attributes, such
as distance, dimension, angle, and occupancy ratio.
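The sketch below illustrates the sampling side under simple assumptions (an exponential-moving-average difficulty estimate per group and a rank-based schedule); it is not the exact COMAug procedure.

```python
import numpy as np

class CurriculumGroupSampler:
    """Group-level, easy-to-hard sampling of the ground-truth database."""

    def __init__(self, num_groups, momentum=0.9):
        self.difficulty = np.zeros(num_groups)  # EMA of per-group losses
        self.momentum = momentum

    def update(self, group_ids, cls_losses):
        # Refresh group difficulties from the objects augmented at this step.
        for g, loss in zip(group_ids, cls_losses):
            self.difficulty[g] = (self.momentum * self.difficulty[g]
                                  + (1.0 - self.momentum) * loss)

    def sample(self, n, progress):
        # progress in [0, 1]: early training favors easy (low-difficulty)
        # groups; later training shifts probability mass toward hard groups.
        rank = self.difficulty.argsort().argsort() / max(len(self.difficulty) - 1, 1)
        logits = (2.0 * progress - 1.0) * rank
        probs = np.exp(logits) / np.exp(logits).sum()
        return np.random.choice(len(probs), size=n, p=probs)
```

Objects inside a sampled group would then be drawn uniformly, matching the group-level versus within-group split described above.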
We demonstrate the efficacy of our proposed method
through extensive experiments and ablation studies. In sum-
mary, our contributions include:
• We propose the COM framework which embeds the
easy-to-hard training strategy into both loss design and
augmentation process in LiDAR-based object detec-
tion. For the loss design, COMLoss is introduced to
dynamically predict object-level difficulties, based on
which we emphasize objects to different extents when
the training proceeds. For the augmentation, a well-
designed COMAug first clusters objects in ground-
truth database with carefully-picked heuristics. During
training, COMAug updates group-level difficulties and
controls sampling process in augmentation in a curric-
ular manner.
• To the best of our knowledge, COM is the first to ex-
plore the potentials of curriculum learning in conven-
tional LiDAR-based 3D object detection task. Exten-
sive experiments and ablation studies reveal the supe-
riority and generality of the proposed framework.
|
Ziwen_AutoFocusFormer_Image_Segmentation_off_the_Grid_CVPR_2023 | Abstract
Real world images often have highly imbalanced content
density. Some areas are very uniform, e.g., large patches of
blue sky, while other areas are scattered with many small
objects. Yet, the commonly used successive grid downsam-
pling strategy in convolutional deep networks treats all ar-
eas equally. Hence, small objects are represented in very
few spatial locations, leading to worse results in tasks such
as segmentation. Intuitively, retaining more pixels repre-
senting small objects during downsampling helps to pre-
serve important information. To achieve this, we propose
AutoFocusFormer (AFF), a local-attention transformer im-
age recognition backbone, which performs adaptive down-
sampling by learning to retain the most important pixels for
the task. Since adaptive downsampling generates a set of
pixels irregularly distributed on the image plane, we aban-
don the classic grid structure. Instead, we develop a novel
point-based local attention block, facilitated by a balanced
clustering module and a learnable neighborhood merging
module, which yields representations for our point-based
versions of state-of-the-art segmentation heads. Experi-
ments show that our AutoFocusFormer (AFF) improves sig-
nificantly over baseline models of similar sizes.
| 1. Introduction
Typical real-world images distribute content unevenly.
Consider the photo of a typical outdoor scene in Fig. 1:
Large swaths of the image contain textureless regions like
the ground, while a few regions contain many small ob-
jects. Despite this, most computer vision neural networks
distribute computation evenly across the image; every pixel,
regardless of texture or importance, is processed with the
same computational cost. Popular convolutional neural net-
works operate on regularly-arranged square patches. Al-
*Work done while Chen Ziwen was an intern at Apple Inc.
Figure 1. Comparison between on-grid model Swin [16] and off-
grid model AFF. The red pixels indicate the locations of the re-
maining tokens. AFF downsamples non-uniformly, automatically
focusing on more textured, important image regions, which leads
to better performance on small objects in the scene.
though recent transformer architectures do not strictly de-
pend on a grid structure, many transformer-based meth-
ods adopt grid-based techniques such as stride-16 convolu-
tions [5] and 7×7square windows for local attention [16].
Despite its popularity, uniform downsampling is less
effective for tasks that require pixel-level details such as
segmentation. Here, uniform downsampling unfortunately
makes tiny objects even tinier – possibly dropping needed,
pixel-level information. To combat this, many techniques
increase the input resolution [6,31] to obtain better segmen-
tation performance. This intuitively helps, as larger input
will lead to higher resolution after downsampling. How-
ever, increasing input resolution is costly in memory and
computation, as this brute-force bandaid neglects the under-
lying issue – namely, uniform downsampling.
Figure 2. The network architecture of AutoFocusFormer. The model consists of four stages, each stage processing a successively downsampled set of tokens. Within each stage, tokens first go through balanced clustering, then attend to the tokens in their local neighborhoods defined by the nearby clusters in the following local-attention blocks, and finally adaptively merge into the set of downsampled output tokens with weights modulated by the learnable importance scores.
Some prior
works amend this by irregularly sampling points in the seg-
mentation decoder [13], but by still relying on a uniformly-
downsampled convolutional encoder, these techniques re-
main susceptible to the pitfalls of uniform downsampling.
To address this concern, we need solutions that en-
able computer vision models to allocate computation non-
uniformly across each image. In particular, we need a
downsampling strategy that retains important details, while
more aggressively summarizing texture-less regions such as
sky or road. However, non-uniform downsampling breaks
from the grid structure that existing architectures rely on.
Prior work on adaptive downsampling [8, 14, 27] addresses
this by simply using global attention, but global attention
does not scale to resolutions much higher than that of Ima-
geNet, such as those required for segmentation tasks.
To satisfy this need for adaptive, scalable downsampling
strategies, we propose AutoFocusFormer (AFF) . To our
knowledge, AFF is the first end-to-end segmentation net-
work with successive adaptive downsampling stages . To
scale to higher resolutions required in segmentation tasks,
AFF employs local attention blocks. In order to define
local attention neighborhoods among irregularly sampled
tokens, we develop a novel balanced clustering algorithm
which employs space-filling curves to group irregular loca-
tions into neighborhoods. We also propose a novel adaptive
downsampling module that learns the importance of different image locations through a differentiable neighborhood
merging process (Fig. 4). Finally, we modify state-of-the-
art segmentation heads so that they can be applied on the
irregular-spaced representations our backbone generates.
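To illustrate the space-filling-curve idea in isolation, the simplified sketch below orders tokens along a Z-order (Morton) curve and chunks them into equal-size clusters; the actual balanced clustering algorithm in this paper is more involved, so the function names and quantization details should be read as assumptions.

```python
import numpy as np

def morton_key(x, y, bits=16):
    """Interleave the bits of integer coordinates to order points on a Z-curve."""
    key = np.zeros_like(x, dtype=np.int64)
    for b in range(bits):
        key |= ((x >> b) & 1) << (2 * b)
        key |= ((y >> b) & 1) << (2 * b + 1)
    return key

def balanced_clusters(positions, cluster_size, bits=16):
    """positions: (N, 2) float token locations. Returns one cluster id per
    token, with every cluster holding `cluster_size` consecutive tokens
    along the curve, so neighborhoods stay balanced for local attention."""
    p = positions - positions.min(axis=0)
    q = (p / (p.max(axis=0) + 1e-6) * (2 ** bits - 1)).astype(np.int64)
    order = np.argsort(morton_key(q[:, 0], q[:, 1], bits))
    cluster_id = np.empty(len(positions), dtype=np.int64)
    cluster_id[order] = np.arange(len(positions)) // cluster_size
    return cluster_id
```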
Our AutoFocusFormer attains state-of-the-art perfor-
mance with less computational cost across major segmenta-
tion tasks, with especially strong results when using smaller
models. Furthermore, by moving away from the grid struc-
ture, our downsampling strategy can support a larger range
of computational budget by retaining any number of tokens,
rather than operating only at rates of 1/4, 1/16, etc.
To summarize, our contributions are:
• To our knowledge, we introduce the first end-to-end
segmentation network with successive adaptive down-
sampling stages and with flexible downsampling rates.
• To facilitate a local attention transformer on irregularly
spaced tokens, we propose a novel balanced clustering
algorithm to group tokens into neighborhoods. We also
propose a neighborhood merging module that enables
end-to-end learning of adaptive downsampling.
• We adapt state-of-the-art decoders such as deformable
DETR [49], Mask2Former [2] and HCFormer [34] to
operate on irregularly spaced sets of tokens.
• Results show that our approach achieves state-of-the-
art for both image classification and segmentation with
fewer FLOPs, and improves significantly on the recog-
nition of small objects in instance segmentation tasks.
|
Zheng_PointAvatar_Deformable_Point-Based_Head_Avatars_From_Videos_CVPR_2023 | Abstract
The ability to create realistic animatable and relightable
head avatars from casual video sequences would open up
wide ranging applications in communication and entertain-
ment. Current methods either build on explicit 3D mor-
phable meshes (3DMM) or exploit neural implicit repre-
sentations. The former are limited by fixed topology, while
the latter are non-trivial to deform and inefficient to render.
Furthermore, existing approaches entangle lighting and
albedo, limiting the ability to re-render the avatar in new
environments. In contrast, we propose PointAvatar, a de-
formable point-based representation that disentangles the
source color into intrinsic albedo and normal-dependent
shading. We demonstrate that PointAvatar bridges the gap
between existing mesh- and implicit representations, com-
bining high-quality geometry and appearance with topo-
logical flexibility, ease of deformation and rendering effi-
ciency. We show that our method is able to generate an-
imatable 3D avatars using monocular videos from multi-
ple sources including hand-held smartphones, laptop web-
cams and internet videos, achieving state-of-the-art qual-
ity in challenging cases where previous methods fail, e.g.,
thin hair strands, while being significantly more efficient in
training than competing methods.
contact: [email protected]
project page: https://zhengyuf.github.io/PointAvatar/ | 1. Introduction
Personalized 3D avatars will enable new forms of com-
munication and entertainment. Successful tools for creat-
ing avatars should enable easy data capture, efficient com-
putation, and create a photo-realistic, animatable, and re-
lightable 3D representation of the user. Unfortunately, ex-
isting approaches fall short of meeting these requirements.
Recent methods that create 3D avatars from videos ei-
ther build on 3D morphable models (3DMMs) [ 26,36] or
leverage neural implicit representations [ 32,33,35]. The
former methods [ 8,13,22,23] allow efficient rasterization
and inherently generalize to unseen deformations, but they
cannot easily model individuals with eyeglasses or com-
plex hairstyles, as 3D meshes are limited by a-priori fixed
topologies and surface-like geometries. Recently, neu-
ral implicit representations have also been used to model
3D heads [ 5,11,16,54]. While they outperform 3DMM-
based methods in capturing hair strands and eyeglasses,
they are significantly less efficient to train and render, since
rendering a single pixel requires querying many points
along the camera ray. Moreover, deforming implicit rep-
resentations in a generalizable manner is non-trivial and
existing approaches have to revert to an inefficient root-
finding loop, which impacts training and testing time nega-
tively [ 10,18,25,45,54].
To address these issues, we propose PointAvatar, a novel
avatar representation that uses point clouds to represent the
canonical geometry and learns a continuous deformation
                    Efficient   Easy        Flexible   Thin      Surface
                    Rendering   Animation   Topology   Strands   Geometry
Meshes                  ✓           ✓           ✗          ✗         ✓
Implicit Surfaces       ✗           ✗           ✓          ✗         ✓
Volumetric NeRF         ✗           ✗           ✓          ✓         ✗
Points (ours)           ✓           ✓           ✓          ✓         ✓
Table 1. PointAvatar is efficient to render and deform which en-
ables straightforward rendering of full images during training. It
can also handle flexible topologies and thin structures and can re-
construct good surface normals in surface-like regions, e.g., skin.
field for animation. Specifically, we optimize an oriented
point cloud to represent the geometry of a subject in a
canonical space. For animation, the learned deformation
field maps the canonical points to the deformed space with
learned blendshapes and skinning weights, given expression
and pose parameters of a pretrained 3DMM. Compared to
implicit representations, our point-based representation can
be rendered efficiently with a standard differentiable raster-
izer. Moreover, they can be deformed effectively using es-
tablished techniques, e.g., skinning. Compared to meshes,
points are considerably more flexible and versatile. Be-
sides the ability to conform the topology to model acces-
sories such as eyeglasses, they can also represent complex
volume-like structures such as fluffy hair. We summarize
the advantanges of our point-based representation in Tab. 1.
One strength of our method is the disentanglement of
lighting effects. Given a monocular video captured in un-
constrained lighting, we disentangle the apparent color into
the intrinsic albedo and the normal-dependent shading; see
Fig.1. However, due to the discrete nature of points, accu-
rately computing normals from point clouds is a challenging
and costly task [ 6,17,29,37], where the quality can deteri-
orate rapidly with noise, and insufficient or irregular sam-
pling. Hence we propose two techniques to (a) robustly and
accurately obtain normals from learned canonical points,
and (b) consistently transform the canonical point normals
with the non-rigid deformation. For the former, we exploit
the low-frequency bias of MLPs [ 38] and estimate the nor-
mals by fitting a smooth signed distance function (SDF)
to the points; for the latter, we leverage the continuity of
the deformation mapping and transform the normals analyt-
ically using the deformation’s Jacobian. The two techniques
lead to high-quality normal estimation, which in turn propa-
gates the rich geometric cues contained in shading to further
improve the point geometry. With disentangled albedo and
detailed normal directions, PointAvatar can be relit and ren-
dered under novel scene lighting.
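The normal transformation mentioned above follows the standard rule n_d ∝ J^{-T} n_c for a deformation x_d = D(x_c) with Jacobian J = ∂D/∂x_c. The snippet below is a minimal, unoptimized illustration of that rule, not the code used in this paper; deform_fn stands for any differentiable PyTorch deformation.

```python
import torch

def deform_normals(deform_fn, x_c, n_c):
    """x_c: (N, 3) canonical points, n_c: (N, 3) canonical normals.
    deform_fn maps a (3,) point to its deformed (3,) position.
    Returns unit normals in the deformed space."""
    deformed = []
    for x, n in zip(x_c, n_c):
        J = torch.autograd.functional.jacobian(deform_fn, x)  # (3, 3) Jacobian
        nd = torch.linalg.solve(J.T, n)                        # J^{-T} n
        deformed.append(nd / nd.norm())
    return torch.stack(deformed)
```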
As demonstrated using various videos captured with
DSLR, smartphone, laptop cameras, or obtained from the
internet, the proposed representation combines the advan-
tages of popular mesh and implicit representations, and sur-
passes both in many challenging scenarios. In summary, our contributions include:
1. We propose a novel representation for 3D animatable
avatars based on an explicit canonical point cloud and
continuous deformation, which shows state-of-the-art
photo-realism while being considerably more efficient
than existing implicit 3D avatar methods;
2. We disentangle the RGB color into a pose-agnostic
albedo and a pose-dependent shading component;
3. We demonstrate the advantage of our methods on a
variety of subjects captured through various commod-
ity cameras, showing superior results in challenging
cases, e.g., for voluminous curly hair and novel poses
with large deformation.
|
Zhu_Knowledge_Combination_To_Learn_Rotated_Detection_Without_Rotated_Annotation_CVPR_2023 | Abstract
Rotated bounding boxes drastically reduce output ambi-
guity of elongated objects, making it superior to axis-aligned
bounding boxes. Despite the effectiveness, rotated detectors
are not widely employed. Annotating rotated bounding boxes
is such a laborious process that they are not provided in many
detection datasets where axis-aligned annotations are used
instead. In this paper, we propose a framework that allows
the model to predict precise rotated boxes only requiring
cheaper axis-aligned annotation of the target dataset1.
To achieve this, we leverage the fact that neural networks
are capable of learning richer representation of the target
domain than what is utilized by the task. The under-utilized
representation can be exploited to address a more detailed
task. Our framework combines task knowledge of an out-of-
domain source dataset with stronger annotation and domain
knowledge of the target dataset with weaker annotation. A
novel assignment process and projection loss are used to en-
able the co-training on the source and target datasets. As a
result, the model is able to solve the more detailed task in the
target domain, without additional computation overhead dur-
ing inference. We extensively evaluate the method on various
target datasets including fresh-produce dataset, HRSC2016
and SSDD. Results show that the proposed method consis-
tently performs on par with the fully supervised approach.
| 1. Introduction
Rotated detectors introduced in recent works [17, 20, 32]
have received attention due to their outstanding performance
for top view images [15, 33, 34]. They reduce the output am-
biguity of elongated objects for downstream tasks making
them superior to axis-aligned detectors in dense scenes with
severe occlusions [18]. However, the rotated annotation is
more expensive compared to axis-aligned annotation. Fur-
*Corresponding author.
1Code is available at: https://github.com/alanzty/KCR-Official
Figure 1. KCR combines the task knowledge of a source dataset
with stronger rotated annotation and the domain knowledge of the
target dataset with weaker axis-aligned annotation, which enables
the model to predict rotated detection on the target domain.
thermore, popular 2D annotation tools such as Sagemaker
Groundtruth2 and VGG app3 do not support rotated bound-
ing box annotations. As a result, many popular detection
datasets only have axis-aligned annotations [3, 4, 11]. These
problems reduce the potential scope of the implementation
of rotated detectors. In this work, we introduce Knowledge
Combination to learn Rotated object detection, a training
scheme that only requires cheaper axis-aligned annotation
for the target dataset in order to predict rotated boxes.
Neural networks encode data into a latent space, which
is then decoded to optimize the given task. The latent em-
bedding is an abstract representation of the data, containing
much richer information than the output [29]. Early works in
deep learning show that the model implicitly learns to detect
image features such as edges and corners [10, 12], which
can be used for more detailed tasks if decoded properly. We
believe decoding to a more precise task on the target do-
main can be learnt via co-optimizing with a strongly labelled
source dataset. We design a framework that combines task
knowledge of rotated detection from a source dataset, and
the domain knowledge of a disjoint class of objects in the
target dataset with only axis-aligned annotation, as shown
in Figure 1. This approach combines the advantage of both
weakly-supervised learning and transfer learning.
We follow a design principle that the framework should
2https://aws.amazon.com/sagemaker/data-labeling/
3https://www.robots.ox.ac.uk/~vgg/software/via/
maximize the target domain knowledge learnt by the model
while minimizing the negative impact caused by weaker
labels. This is achieved by co-training the source and tar-
get dataset with projection losses and a novel assignment
process. The design choices are validated through ablation
studies. We conduct extensive experiments to demonstrate
that our framework is robust to a large domain gap between
source and target dataset. Therefore, box orientation can
practically be learnt for free with KCR, due to the availabil-
ity of free public source datasets such as DOTA [32] with
rotated annotations. We show the efficacy of this method on a
fresh-produce dataset with high density of objects and severe
occlusions. The performance (AP50) gap between the pro-
posed method, learning from weak axis-aligned boxes, and
the fully-supervised model learning from strong rotated an-
notation, reduces to only 3.2%for the challenging cucumber
dataset. We apply the same framework to HRSC2016 [16]
and SSDD [27] datasets to show that our method consis-
tently performs on par with fully supervised models. The
performance gap reduces to 1.0%for SSDD. We believe
our approach can greatly increase the usage and impact of
rotated object detectors. The source code will be publicly
available for the community to save future annotation cost.
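One natural way to realize this kind of supervision is to project the predicted rotated box onto its tightest axis-aligned box and penalize the mismatch against the weak label, as sketched below; this is an illustrative projection loss, and the exact formulation and assignment process used in this paper may differ.

```python
import torch

def project_rotated_to_aabb(boxes):
    """boxes: (N, 5) rotated boxes (cx, cy, w, h, theta).
    Returns the tightest enclosing axis-aligned boxes as (N, 4) x1y1x2y2."""
    cx, cy, w, h, t = boxes.unbind(dim=-1)
    ex = 0.5 * (w * t.cos().abs() + h * t.sin().abs())
    ey = 0.5 * (w * t.sin().abs() + h * t.cos().abs())
    return torch.stack([cx - ex, cy - ey, cx + ex, cy + ey], dim=-1)

def projection_loss(pred_rotated, gt_aabb):
    """Compare the projected prediction with the weak axis-aligned label;
    gradients flow back into all rotated parameters, including theta."""
    return torch.nn.functional.smooth_l1_loss(
        project_rotated_to_aabb(pred_rotated), gt_aabb)
```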
In summary, our main contributions are as follows:
1)We introduce a framework that combines task knowl-
edge of a strongly labelled source dataset and domain
knowledge of a weakly labelled target dataset.
2)We apply this method in 2D rotated detection task, en-
abling the model to predict rotated bounding box with
only axis-aligned annotation and verify the generality
of the method with several datasets.
3)We demonstrate robustness of the framework to vari-
ous domain gaps between source and target datasets.
Hence, box orientation can be learnt with no additional
annotation cost in practical applications.
|
Zheng_Where_Is_My_Spot_Few-Shot_Image_Generation_via_Latent_Subspace_CVPR_2023 | Abstract
Image generation relies on massive training data that
can hardly produce diverse images of an unseen category
according to a few examples. In this paper, we address
this dilemma by projecting sparse few-shot samples into a
continuous latent space that can potentially generate in-
finite unseen samples. The rationale behind is that we
aim to locate a centroid latent position in a conditional
StyleGAN, where the corresponding output image on that
centroid can maximize the similarity with the given sam-
ples. Although the given samples are unseen for the con-
ditional StyleGAN, we assume the neighboring latent sub-
space around the centroid belongs to the novel category,
and therefore introduce two latent subspace optimization
objectives. In the first one we use few-shot samples as pos-
itive anchors of the novel class, and adjust the StyleGAN to
produce the corresponding results with the new class label
condition. The second objective is to govern the genera-
tion process from the other way around, by altering the cen-
*Equal Contributions.
†Corresponding authors.troid and its surrounding latent subspace for a more pre-
cise generation of the novel class. These reciprocal opti-
mization objectives inject a novel class into the StyleGAN
latent subspace, and therefore new unseen samples can be
easily produced by sampling images from it. Extensive ex-
periments demonstrate superior few-shot generation perfor-
mances compared with state-of-the-art methods, especially
in terms of diversity and generation quality. Code is avail-
able at https://github.com/chansey0529/LSO.
| 1. Introduction
Recent advances in generative models [3, 5, 8, 11, 19, 36]
allow synthesizing of high-quality and realistic images with
diverse styles. However, the success of these models re-
lies heavily on large-scale data. Preparing new data for a
novel class is costly, so it is natural to raise a question, “can
we generate high-quality images with a glance at a few im-
ages?” This leads to the few-shot image generation prob-
lem, where the model is required to generate a novel cate-
gory with only a few images available. Unfortunately, since
the extreme low-shot setting can easily cause catastrophic
over-fitting, few-shot image generation is still challenging.
Existing methods commonly suppose that the seen mod-
els have implicit generalization ability towards unseen cate-
gories. Based on this assumption, task-specific optimization
is adopted to seek proper initial parameters, which better
generalize to the downstream tasks [6, 25]. Testing phase
generation is another solution, which skips integrating the
information of unseen category into model weights. Never-
theless, the generated images are either with a lot of class-
specific information distortion [7] or fail to restore the de-
tailed features, such as textures [10, 44]. The main assump-
tion of this line of research in model generalization ability
is false, and therefore the model trained on seen data can-
not extract out-of-domain unseen-specific features without
adaptation, e.g., generating a spotted dog via glancing on
a golden retriever, which significantly limits their practical
usage in real-world scenarios. As a consequence, a key fac-
tor to the success of few-shot synthesis is to expose the sam-
ples of unseen classes to the model.
One of the major obstacles is the sparsity of the unseen
samples. Traditional generative networks require model-
ing the continuous distribution for generating diverse im-
ages with unseen-specific features. However, the discrete
data points under the few-shot setting make the model ill-
informed about the inner structure of the unseen distribu-
tion. On the other hand, the pretrained latent spaces of
Style-series models [17–19, 43] are shown to be semanti-
cally interpretable and continuous. This property ideally
fits our problem. Once the proper latent locations of unseen
samples are found, we can complement the marginal region
with the hidden semantic information and form a subspace
for the unseen category. In this way, diverse unseen images
can be generated via sampling from the new subspace.
Based on the above insights, we proposed a novel la-
tent subspace optimization framework for few-shot image
generation. The key idea is to search for the optimal sub-
distribution of unseen using latent anchor localization , and
then align the sub-distribution with the input unseen distri-
bution using latent subspace refinement . To obtain an un-
seen correlated semantic region in the latent space, we first
locate the subspace of the unseen category by faithful an-
chor optimization. Specifically, the latent codes of the un-
seen category are served as reliable latent subspace indica-
tors by inverting the available unseen images into the latent
space. Based on these anchors, the coarse centroid of the
unseen distribution is pulled to the hypothetical point using
a subspace localization loss.
Subsequently, due to the semantic deficiency of few-shot
images, distributional shift exists between the resulting dis-
tribution of our subspace and the real unseen distribution.
To mitigate semantic misalignment, we propose to refine
the latent subspace of unseens. We employ an adversar-
ial training scheme to inject the unseen correlated featuresinto the generator. However, the guidance of the adversar-
ial game easily leads to over-emphasis on transferring the
low-level features, ignoring the learning of unseen seman-
tics ( e.g., fails to generate a wolf but a wolf-like dog). Thus,
the generated images may belong to a completely different
semantic category, though they contain similar textures with
the few-shot examples. To preserve the unseen-specific se-
mantic, we further restrict the latent subspace by a semantic
stabilization loss. Once the StyleGAN and its subspace are
properly optimized, our framework is able to generate di-
verse and high-quality unseen images. We compare to state-
of-the-art methods extensively on different datasets, and we
show significant superiority over them.
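For intuition, a minimal sketch of the anchor-and-centroid idea is given below; the anchors are latent codes obtained by inverting the few unseen images, while the loss name, the sampling radius, and the use of the anchor spread as a scale are illustrative assumptions rather than the exact objectives of this paper.

```python
import torch

def localization_loss(class_centroid, anchors):
    """Pull the learnable centroid of the injected novel class toward the
    mean of the K inverted latent anchors. anchors: (K, D), centroid: (D,)."""
    return torch.nn.functional.mse_loss(class_centroid, anchors.mean(dim=0))

def sample_novel_latents(class_centroid, anchors, n, radius=0.5):
    """Draw n latents from the neighborhood of the centroid; the spread of
    the anchors sets the scale of the subspace being sampled."""
    scale = radius * anchors.std(dim=0, keepdim=True)
    noise = torch.randn(n, anchors.shape[1], device=anchors.device)
    return class_centroid.unsqueeze(0) + scale * noise
```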
In summary, the contribution of this paper is fourfold:
• We delve into few-shot image generation from a novel
perspective of exploring the continuity of the latent
space for discovering unseen category.
• We propose a novel latent subspace optimization
framework to model the distribution of unseen sam-
ples, while injecting category-specific features into the
generated images.
• Experimental results show that our approach achieves
state-of-the-art performances on three datasets, largely
reducing the FID scores by 7.58, 4.37, and 0.98 on
Flowers, AnimalFaces, and VGGFaces respectively
while gaining diversity on most datasets.
• We extend our model to other subfields like image edit-
ing and high-resolution image generation with few-
shot setting. Additionally, we explore the potential of
our framework in few-shot incremental generation.
|
Zhou_Neural_Texture_Synthesis_With_Guided_Correspondence_CVPR_2023 | Abstract
Markov random fields (MRFs) are the cornerstone of
classical approaches to example-based texture synthesis.
Yet, they are not fully valued in the deep learning era. This paper aims to re-promote the combination of MRFs and neural networks, i.e., the CNNMRF model, for texture synthesis, with two key observations made. We first propose to compute the Guided Correspondence Distance in the nearest
neighbor search, based on which a Guided Correspondence
loss is defined to measure the similarity of the output texture
to the example. Experiments show that our approach surpasses existing neural approaches in uncontrolled and controlled texture synthesis. More importantly, the Guided Correspondence loss can function as a general textural loss in, e.g., training generative networks for real-time controlled synthesis and inversion-based single-image editing. In contrast, existing textural losses, such as the Sliced Wasserstein loss, cannot work on these challenging tasks.
| 1. Introduction
Example-based texture synthesis has been a long-
standing topic in vision and graphics. It aims to synthesize
*Corresponding author
new textures of any resolution that retain the patterns of a given exemplar, with no apparent visual flaws while remaining realistic. Classical approaches formulate the synthesis as a Markov Random Field (MRF) problem and solve it by iteratively optimizing the output patches to be similar to their nearest neighbor in the input. This MRF-based optimization framework is not only widely used in texture synthesis [18–20, 25, 39], but also adopted in more general tasks
such as image synthesis and editing [1, 7].
Despite the success of MRF optimization, recent atten-
tion has been devoted to utilizing deep neural networks, either matching the statistics of deep features [12, 16] or training generative adversarial networks (GANs) [5, 30, 33, 40].
ing generative adversarial networks (GANs) [5, 30,33,40].
In this paper, we retake the MRF optimization framework,given its versatility and flexibility in texture synthesis, andcombine it with deep neural networks. We first search thenearest neighbor for each output patch according to Guided
Correspondence Distance over multi-layer deep features.
Then, unlike traditional methods that copy and paste sourcepatches, we define a Guided Correspondence loss that mea-
sures the overall similarity based on all the correspondingpatches, and update the output pixels via back-propagation.
Actually, Li et al .[22] used to explore a CNNMRF
model in 2016, which combines MRF and neural networksfor style transfer. Champandard [6] applied it to texture syn-thesis later. Comparing to traditional texture optimization,
the main issue of CNNMRF is that its results have a poor patch diversity and severe blurry artifacts; see, e.g., Figure 2.
Figure 2. The synthesized textures from the CNNMRF model [6] have obvious repetition and blurry issues.
To address that, we made two critical changes in our approach. First, the Guided Correspondence Distance is defined as a weighted sum of various penalty terms. Thus, in the nearest neighbor search, we can take more factors such as matching diversity into account rather than only consider patch similarity. Second, inspired by the Contextual loss [26] used for matching image statistics in style transfer, we modify the conventional L2-based MRF energy to account for contextual similarities. The motivation is that we hope the nearest neighbor we found for a target patch is significantly closer to it than all other source patches. The so-designed Guided Correspondence loss improves the sharpness of the synthesized results substantially. Our framework can be easily extended to various guided scenarios, including (but not limited to) user annotations, progression maps, and orientation fields. We just add the corresponding penalties to the Guided Correspondence Distance.
Experiments show that our approach performs remarkably well for texture optimization both in uncontrolled and controlled scenarios, reaching state-of-the-art visual quality. Moreover, the Guided Correspondence loss can be used as a general textural loss. We demonstrate its usage in, e.g., training feedforward networks for real-time controlled synthesis and inversion-based single-image editing. Existing statistic-based losses, such as the Sliced Wasserstein loss, cannot handle these challenging tasks. Code is available at https://github.com/EliotChenKJ/Guided-Correspondence-Loss.
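A compact sketch of a contextual-style correspondence loss in this spirit is given below; it keeps only the relative-distance normalization and omits the guidance and diversity penalty terms, so the bandwidth h and the reduction are simplifying assumptions rather than the exact loss proposed here.

```python
import torch

def guided_correspondence_loss(target_feats, source_feats, h=0.5, eps=1e-5):
    """target_feats: (Nt, C) and source_feats: (Ns, C) patch features."""
    t = torch.nn.functional.normalize(target_feats, dim=1)
    s = torch.nn.functional.normalize(source_feats, dim=1)
    dist = 1.0 - t @ s.T                                        # cosine distance
    rel = dist / (dist.min(dim=1, keepdim=True).values + eps)   # relative distance
    sim = torch.softmax((1.0 - rel) / h, dim=1)                 # contextual similarity
    # Reward targets whose best match is much closer than the other sources.
    return -torch.log(sim.max(dim=1).values + eps).mean()
```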
ably well for texture optimization both in uncontrolled andcontrolled scenarios, reaching state-of-the-art visual qual-ity. Moreover, the Guided Correspondence loss can be usedas a general textural loss. We demonstrate its usage in, e.g.,training feedforward networks for real-time controlled syn-thesis and inversion-based single-image editing. Existingstatistic-based losses, such as the Sliced Wasserstein loss,cannot handle these challenging tasks. Code is available athttps://github.com/EliotChenKJ/Guided-Correspondence- Loss.
|
Zhou_Instance-Aware_Domain_Generalization_for_Face_Anti-Spoofing_CVPR_2023 | Abstract
Face anti-spoofing (FAS) based on domain generaliza-
tion (DG) has been recently studied to improve the gener-
alization on unseen scenarios. Previous methods typically
rely on domain labels to align the distribution of each do-
main for learning domain-invariant representations. How-
ever, artificial domain labels are coarse-grained and sub-
jective, which cannot reflect real domain distributions ac-
curately. Besides, such domain-aware methods focus on
domain-level alignment, which is not fine-grained enough
to ensure that learned representations are insensitive to do-
main styles. To address these issues, we propose a novel
perspective for DG FAS that aligns features on the instance
level without the need for domain labels. Specifically,
Instance-Aware Domain Generalization framework is pro-
posed to learn the generalizable feature by weakening the
features’ sensitivity to instance-specific styles. Concretely,
we propose Asymmetric Instance Adaptive Whitening to
adaptively eliminate the style-sensitive feature correlation,
boosting the generalization. Moreover, Dynamic Kernel
Generator and Categorical Style Assembly are proposed to
first extract the instance-specific features and then generate
the style-diversified features with large style shifts, respec-
tively, further facilitating the learning of style-insensitive
features. Extensive experiments and analysis demonstrate
the superiority of our method over state-of-the-art competi-
tors. Code will be publicly available at this link.
| 1. Introduction
Face anti-spoofing (FAS) plays a critical role in protect-
ing face recognition systems from various presentation at-
tacks, e.g., printed photos, video replay, etc. To cope with
these presentation attacks, a series of FAS works based on
*Equal contribution.
†Corresponding author.
Figure 1. Conventional DG-based FAS approaches typically rely
on artificially-defined domain labels to perform domain-aware do-
main generalization , which cannot guarantee that the learned rep-
resentations are still insensitive to domain-specific styles. In con-
trast, our method does not rely on such domain labels and focuses
on the instance-aware domain generalization via exploring asym-
metric instance adaptive whitening on the fine-grained instance level.
hand-crafted features [3, 15, 23, 33, 47], and deeply-learned
features [12, 19, 28, 49, 51] have been proposed. Although
these methods have achieved promising performance in
intra-dataset scenarios, they suffer from poor generalization
when adapting to various unseen domains.
To improve the generalization ability on unseen do-
mains, recent studies introduce domain generalization (DG)
techniques into the FAS tasks, which utilize the adversar-
ial learning [21, 39, 44] or meta-learning [7, 11, 29, 30, 61]
to learn domain-invariant representations. Despite its grat-
ifying progress, most of these DG-based FAS methods uti-
lize domain labels to align the distribution of each domain
for domain-invariant representations, as shown in Figure
1. However, such domain-aware methods suffer from two
major limitations. Firstly, the artificial domain labels uti-
lized in their methods are very coarse, and cannot accu-
rately and comprehensively reflect the real domain distri-
butions. For example, numerous illumination conditions,
attack types, and background scenes are ignored in the
source domains, which might lead to various fine-grained
sub-domains. Though D2AM [7] tries to alleviate these is-
sues via assigning pseudo domain labels to divide the mixed
source domains, it still manually sets the number of pseudo
source domains and does not solve the problem in essence.
Secondly, such domain-level alignment only constrains fea-
tures from the perspective of distribution, which is not fine-
grained enough to guarantee that all channels of features
are insensitive to the instance-specific styles. Thus, the
learned features might still contain information sensitive to
instance-specific styles when encountering novel samples,
failing to generalize on the unseen domain.
To address these issues, we propose a novel perspec-
tive of DG-FAS that explores the style-insensitive features
and aligns them on a fine-grained instance level without
the need for domain labels, improving the generalization
abilities towards unseen domains. Specifically, we propose
anInstance-Aware Domain Generalization (IADG) frame-
work to dynamically extract generalized representations for
each sample by encouraging their features to be insensitive
to the instance-specific styles. Concretely, we first intro-
duce Asymmetric Instance Adaptive Whitening (AIAW) to
boost the generalization of features via adaptively whiten-
ing the style-sensitive feature correlation for each instance.
Instead of directly learning the domain-agnostic features,
AIAW aims to weaken the feature correlation ( i.e.,covari-
ance matrix) from higher-order statistics on a fine-grained
instance level. Considering the distribution discrepancies of real and spoof samples, AIAW adopts asymmetric strategies to supervise them, boosting the generalization capability.
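As a concrete, simplified illustration of instance-level whitening with asymmetric weights for real and spoof samples, consider the sketch below; the specific penalty (the magnitude of off-diagonal covariance entries) and the weight values are assumptions, not the exact AIAW loss.

```python
import torch

def instance_whitening_loss(feat, is_real, w_real=1.0, w_spoof=0.5):
    """feat: (B, C, H, W) features; is_real: (B,) boolean labels."""
    B, C, H, W = feat.shape
    x = feat.flatten(2)                            # (B, C, H*W)
    x = x - x.mean(dim=2, keepdim=True)
    cov = x @ x.transpose(1, 2) / (H * W - 1)      # per-instance covariance
    diag = torch.diag_embed(torch.diagonal(cov, dim1=1, dim2=2))
    off_diag = (cov - diag).abs().mean(dim=(1, 2)) # style-sensitive correlation
    w = torch.where(is_real,
                    torch.full_like(off_diag, w_real),
                    torch.full_like(off_diag, w_spoof))
    return (w * off_diag).mean()
```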
Moreover, to facilitate the learning of style-insensitive features in AIAW, Dynamic Kernel Generator (DKG) and
Categorical Style Assembly (CSA) are proposed to gen-
erate style-diversified features for further AIAW. Specifi-
cally, DKG models the instance-adaptive features, which
automatically generates instance-adaptive filters that work
with static filters to facilitate comprehensive instance-aware
feature learning. Based on such instance-adaptive features,
CSA simulates instance-wise domain shifts by considering
the instance diversity to generate style-diversified samples
in a wider feature space, which augments real and spoof
faces separately to prevent the label changes in the FAS
task. Our main contributions are three-fold:
•We propose a novel perspective of DG FAS that aligns
feature representations on the fine-grained instance level in-
stead of relying on artificially-defined domain labels.
•We present an innovative Instance-Aware Domain
Generalization (IADG) framework, which actively simu-
lates the instance-wise domain shifts and whitens the style-
sensitive feature correlation to improve the generalization.
•Extensive experiments with analysis demonstrate the
superiority of our method against state-of-the-art competi-
tors on the widely-used benchmark datasets. |
Zhao_Instance-Specific_and_Model-Adaptive_Supervision_for_Semi-Supervised_Semantic_Segmentation_CVPR_2023 | Abstract
Recently, semi-supervised semantic segmentation has
achieved promising performance with a small fraction of
labeled data. However, most existing studies treat all unla-
beled data equally and barely consider the differences and
training difficulties among unlabeled instances. Differen-
tiating unlabeled instances can promote instance-specific
supervision to adapt to the model’s evolution dynamically.
In this paper, we emphasize the cruciality of instance
differences and propose an instance-specific and model-
adaptive supervision for semi-supervised semantic segmen-
tation, named iMAS . Relying on the model’s performance,
iMAS employs a class-weighted symmetric intersection-
over-union to evaluate quantitative hardness of each un-
labeled instance and supervises the training on unlabeled
data in a model-adaptive manner. Specifically, iMAS learns
from unlabeled instances progressively by weighing their
corresponding consistency losses based on the evaluated
hardness. Besides, iMAS dynamically adjusts the augmen-
tation for each instance such that the distortion degree of
augmented instances is adapted to the model’s generaliza-
tion capability across the training course. Not integrating
additional losses and training procedures, iMAS can obtain
remarkable performance gains against current state-of-the-
art approaches on segmentation benchmarks under different
semi-supervised partition protocols1.
| 1. Introduction
Though semantic segmentation studies [6, 28] have
achieved significant progress, their enormous success relies
on large datasets with high-quality pixel-level annotations.
Semi-supervised semantic segmentation (SSS) [20, 30] has
been proposed as a powerful solution to mitigate the re-
quirement for labeled data. Recent research on SSS has
*Equal contribution. The work was done during an internship at Baidu.
†Corresponding authors ([email protected], wangjing-
[email protected]). This work is supported by Australian Research
Council (ARC DP200103223).
1Code and logs: https://github.com/zhenzhao/iMAS .two main branches, including the self-training (ST) [26]
and consistency regularization (CR) [40] based approaches.
[46] follows a self-training paradigm and performs a selec-
tive re-training scheme to train on labeled and unlabeled
data alternatively. Differently, CR-based works [27, 34]
tend to apply data or model perturbations and enforce the
prediction consistency between two differently-perturbed
views for unlabeled data. In both branches, recent research
[13, 19, 47] demonstrates that strong data perturbations like
CutMix can significantly benefit the SSS training. To fur-
ther improve the SSS performance, current state-of-the-art
approaches [1, 42] integrate the advanced contrastive learn-
ing techniques into the CR-based approaches to exploit the
unlabeled data more efficiently. Works in [21, 24] also aim
to rectify the pseudo-labels through training an additional
correcting network.
Despite their promising performance, SSS studies along
this line come at the cost of introducing extra network
components or additional training procedures. In addi-
tion, majorities of them treat unlabeled data equally and
completely ignore the differences and learning difficulties
among unlabeled samples. For instance, randomly and
indiscriminately perturbing unlabeled data can inevitably
over-perturb some difficult-to-train instances. Such over-
perturbations exceed the generalization capability of the
model and hinder effective learning from unlabeled data.
As discussed in [47], it may also hurt the data distribution.
Moreover, in most SSS studies, final consistency losses on
different unlabeled instances are minimized in an average
manner. However, blindly averaging can implicitly empha-
size some difficult-to-train instances and result in model
overfitting to noisy supervision.
In this paper, we emphasize the cruciality of instance dif-
ferences and aim to provide instance-specific supervision
on unlabeled data in a model-adaptive way. There naturally
exists two main questions. First, how can we differentiate
unlabeled samples? We design an instantaneous instance
“hardness,” to estimate 1) the current generalization ability
of the model and 2) the current training difficulties of dis-
tinct unlabeled samples. Its evaluation is closely related to
the training status of the model, e.g.,a difficult-to-train sam-
Figure 1. Diagram of our proposed iMAS. Labeled data (x, y) is used to train the student model, parameterized by θs, by minimizing the supervised loss Lx. Unlabeled data u, weakly augmented by Aw(·), is first fed into both the student and teacher models to obtain predictions ps and pt, respectively. Then we perform quantitative hardness evaluation on each unlabeled instance by the strategy ϕ(pt, ps). Such hardness information can be subsequently utilized: 1) to apply an adaptive augmentation, denoted by As(·), on unlabeled data to obtain the student model's prediction ˆp; 2) to weigh the unsupervised loss Lu in an instance-specific manner. The teacher model's weight, θt, is updated by the exponential moving average (EMA) of θs across the training course.
ple can become easier with the evolution of the model. Sec-
ond, how can we inject such discriminative information into
the SSS procedure? Since the hardness is assessed based on
the model’s performance, we can leverage such information
to adjust the two critical operations in SSS, i.e.,data pertur-
bations and unsupervised loss evaluations, to adapt to the
training state of the model dynamically.
Motivated by all these observations, we propose an
instance-specific and model-adaptive supervision, named
iMAS , for semi-supervised semantic segmentation. As
shown in Figure 1, following a standard consistency reg-
ularization framework, iMAS jointly trains the student
and teacher models in a mutually-beneficial manner. The
teacher model is an ensemble of historical student models
and generates stable pseudo-labels for unlabeled data. In-
spired by empirical and mathematical analysis in [15, 41],
difficult-to-train instances may undergo considerable dis-
agreement between predictions of the EMA teacher and
the current student. Thus in iMAS, we first evaluate the
instance hardness of each unlabeled sample by calculat-
ing the class-weighted symmetric intersection-over-union
(IoU) between the segmentation predictions of the teacher
(the historical) and student (the most recent) models. Then
based on the evaluation, we perform model-adaptive data
perturbations on each unlabeled instance and minimize an
instance-specific weighted consistency loss to train models
in a curriculum-like manner. In this way, different unlabeled
instances are perturbed and weighted in a dynamic fashion,
which can better adapt to the model’s generalization capa-
bility throughout the training processes.Benefiting from this instance-specific and model-
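A rough sketch of the hardness estimate and the instance-weighted loss is given below; the class weighting (by teacher-predicted area) and the linear down-weighting of hard instances are simplifications of the quantities described above, not the exact formulation.

```python
import torch

def instance_hardness(p_teacher, p_student, num_classes):
    """p_teacher, p_student: (B, H, W) hard label maps.
    Returns a hardness value in [0, 1] for every image."""
    hardness = []
    for pt, ps in zip(p_teacher, p_student):
        ious, weights = [], []
        for c in range(num_classes):
            t, s = (pt == c), (ps == c)
            union = (t | s).sum()
            if union == 0:
                continue
            ious.append((t & s).sum().float() / union)
            weights.append(t.sum().float())        # weight classes by teacher area
        ious, weights = torch.stack(ious), torch.stack(weights)
        hardness.append(1.0 - (ious * weights).sum() / weights.sum().clamp(min=1e-6))
    return torch.stack(hardness)

def weighted_consistency_loss(per_pixel_loss, hardness):
    """per_pixel_loss: (B, H, W); harder instances contribute less."""
    w = 1.0 - hardness
    return (w.view(-1, 1, 1) * per_pixel_loss).mean()
```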
adaptive design, iMAS obtains state-of-the-art (SOTA) per-
formance on Pascal VOC 2012 and Cityscapes datasets un-
der different partition protocols. For example, our method
obtains a high mIOU of 75.3% with only 183 labeled data
on VOC 2012, which is 17.8% higher than the supervised
baseline and 4.3% higher than the previous SOTA. Our
main contributions are summarized as follows,
• iMAS can boost the SSS performance by highlighting the
instance differences, without introducing extra network
components or training losses.
• We perform a quantitative hardness-evaluating analysis
for unlabeled instances in segmentation tasks, based on
the class-weighted teacher-student symmetric IoU.
• We propose an instance-specific and model-adaptive SSS
framework that injects instance hardness into loss eval-
uation and data perturbation to dynamically adapt to the
model’s evolution.
|
Zhou_MonoATT_Online_Monocular_3D_Object_Detection_With_Adaptive_Token_Transformer_CVPR_2023 | Abstract
Mobile monocular 3D object detection (Mono3D) (e.g.,
on a vehicle, a drone, or a robot) is an important yet chal-lenging task. Existing transformer-based offline Mono3Dmodels adopt grid-based vision tokens, which is subopti-mal when using coarse tokens due to the limited available
computational power . In this paper , we propose an online
Mono3D framework, called MonoATT, which leverages a
novel vision transformer with heterogeneous tokens of vary-ing shapes and sizes to facilitate mobile Mono3D. The core
idea of MonoATT is to adaptively assign finer tokens to ar-
eas of more significance before utilizing a transformer to
enhance Mono3D. To this end, we first use prior knowl-edge to design a scoring network for selecting the mostimportant areas of the image, and then propose a token
clustering and merging network with an attention mecha-nism to gradually merge tokens around the selected areas
in multiple stages. Finally, a pixel-level feature map is re-constructed from heterogeneous tokens before employing aSOTA Mono3D detector as the underlying detection core.Experiment results on the real-world KITTI dataset demon-
strate that MonoATT can effectively improve the Mono3Daccuracy for both near and far objects and guarantee lowlatency. MonoATT yields the best performance compared
with the state-of-the-art methods by a large margin and is
ranked number one on the KITTI 3D benchmark.
| 1. Introduction
Three-dimensional (3D) object detection has long been
a fundamental problem in both industry and academia andenables various applications, ranging from autonomous
vehicles [ 17] and drones, to robotic manipulation and
augmented reality applications. Previous methods have
achieved superior performance based on the accurate depth
information from multiple sensors, such as LiDAR signal
[11,23,35,43,44,69] or stereo matching [ 9,10,21,34,37,57].
In order to lower the sensor requirements, a much cheaper,
more energy-efficient, and easier-to-deploy alternative, i.e.,
*Corresponding authors
;ĂͿ'ƌŝĚͲďĂƐĞĚƚŽŬĞŶƐŝŶŵƵůƚŝƉůĞƐƚĂŐĞƐ
;ďͿHĞƚĞƌŽŐĞŶĞŽƵƐƚŽŬĞŶƐŝŶŵƵůƚŝƉůĞƐƚĂŐĞƐ
Figure 1. Illustration of (a) grid-based tokens used in traditional vi-
sion transformers and (b) heterogeneous tokens used in our adap-tive token transformer (A TT). Instead of equally treating all image
regions, our A TT distributes dense and fine tokens to meaningful
image regions ( i.e., distant cars and lane lines) yet coarse tokens
to regions with less information such as the background.
monocular 3D object detection (Mono3D) has been pro-
posed and made impressive progress. A practical online Mono3D detector for autonomous driving should meet the
following two requirements: 1) given the constrained com-
putational resource on a mobile platform, the 3D bounding boxes produced by the Mono3D detector should be accu-
rate enough, not only for near objects but also for far ones,
to ensure, e.g. , high-priority driving safety applications; 2)
the response time of the Mono3D detector should be as low as possible to ensure that objects of interest can be instantly detected in mobile settings.
Current Mono3D methods, such as depth map based
[15,29,36], pseudo-LiDAR based [ 15,29–31,36,54,57],
and image-only based [ 2,3,12,22,26,28,42,48,51,64–67],
mostly follow the pipelines of traditional 2D object de-
tectors [ 41,42,48,66] to first localize object centers from
heatmaps and then aggregate visual features around each
object center to predict the object’s 3D properties, e.g. , lo-
cation, depth, 3D sizes, and orientation. Although it is con-
ceptually straightforward and has low computational over-
head, merely using local features around the predicted object centers is insufficient to understand the scene-level geometric cues for accurately estimating the depth of objects, making existing Mono3D methods far from satisfactory.
Recently, inspired by the success of transformers in natural
language processing, visual transformers with long-rangeattention between image patches have recently been devel-
oped to solve Mono3D tasks and achieve state-of-the-art
(SOTA) performance [ 19,64]. As illustrated in Figure 1
(a), most existing vision transformers follow the grid-based
token generation method, where an input image is divided into a grid of equal image patches, known as tokens. How-
ever, using grid-based tokens is sub-optimal for Mono3D applications such as autonomous driving because of the fol-
lowing two reasons: 1) far objects have smaller size and less
image information, which makes them hard to detect with coarse grid-based tokens; 2) using fine grid-based tokens is prohibitive due to the limited computational power and the stringent latency requirement.
In this paper, we propose an online Mono3D frame-
work, called MonoATT, which leverages a novel vision
transformer with heterogeneous tokens of varying sizes and
shapes to boost mobile Mono3D. We have one key observation that not all image pixels of an object have equivalent significance with respect to Mono3D. For instance, pixels
on the outline of a vehicle are more important than those on the body; pixels on far objects are more sensitive than those
on near objects. The core idea of MonoATT is to automat-
ically assign fine tokens to pixels of more significance and
coarse tokens to pixels of less significance before utilizing a transformer to enhance Mono3D detection. To this end,
as illustrated in Figure 1(b), we apply a similarity compati-
bility principle to dynamically cluster and aggregate image
patches with similar features into heterogeneous tokens in multiple stages. In this way, MonoATT neatly distributes
computational power among image parts of different impor-
tance, satisfying both the high accuracy and low response
time requirements posed by mobile Mono3D applications.
There are three main challenges in designing MonoATT.
First, it is essential yet non-trivial to determine keypoints on the feature map which can represent the most relevant information for Mono3D detection. Such keypoints also serve as cluster centers to group tokens with similar features. To tackle this challenge, we score image features based on prior knowledge in mobile Mono3D scenarios. Specifically, features of targets (e.g., vehicles, cyclists, and pedestrians)
are more important than features of the background. Moreover, more attention is paid to features of distant targets and
the outline of targets. Then, a predefined number of key-
points with the highest scores are selected as cluster cen-
ters to guide the token clustering in each stage. As a result, an image region with dense keypoints will eventually be assigned with fine tokens while a region with sparse keypoints
will be assigned with coarse tokens.
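A minimal sketch of this keypoint selection step is given below: a small convolutional head scores every location of a feature map and the top-K positions are kept as cluster centers. The head design, the value of K, and how the scores are supervised are illustrative assumptions rather than the paper's exact network.

```python
import torch
import torch.nn as nn

class KeypointScorer(nn.Module):
    """Tiny scoring head: one score per spatial location of the feature map."""
    def __init__(self, channels):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, feat, k):
        # feat: (B, C, H, W) -> scores over H*W positions; top-k become centers
        scores = self.head(feat).flatten(1)              # (B, H*W)
        centers = scores.topk(k, dim=1).indices          # (B, k)
        return scores, centers

# usage: regions dense in high-scoring positions later receive fine tokens
feat = torch.randn(2, 64, 24, 80)                        # a KITTI-shaped feature map
scorer = KeypointScorer(64)
scores, centers = scorer(feat, k=128)
print(centers.shape)                                     # torch.Size([2, 128])
```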
Second, given the established cluster centers in each
stage, how to group similar tokens into clusters and effectively aggregate token features within a cluster is non-
intuitive. Due to the local correlation of 2D convolu-
tion, using naive minimal feature distance for token clustering would make the model insensitive to object outlines.
Furthermore, a straightforward feature averaging scheme would be greatly affected by noise introduced by outlier to-
kens. To deal with these issues, we devise a token clustering and merging network. It groups tokens into clusters, taking both the feature similarity and image distance between tokens into account, so that far tokens with similar features are more likely to be designated into one cluster. Then, it
merges all tokens in a cluster into one combined token and
aggregates their features with an attention mechanism.
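The clustering and merging step described above can be prototyped as follows: each token is assigned to the center that minimizes a mix of feature distance and image-plane distance, and the members of a cluster are merged by attention weights toward the center instead of plain averaging. The distance mixing weight and the single-query attention are simplifying assumptions, not the paper's network.

```python
import torch
import torch.nn.functional as F

def cluster_and_merge(tokens, xy, center_idx, lam=0.1, tau=1.0):
    """tokens: (N, C) token features, xy: (N, 2) positions on the image plane,
    center_idx: (K,) indices of the selected cluster centers.
    Assignment mixes feature distance with image distance (weight `lam`);
    merging is an attention-weighted sum of members toward each center."""
    centers_f, centers_xy = tokens[center_idx], xy[center_idx]        # (K, C), (K, 2)
    d_feat = torch.cdist(tokens, centers_f)                           # (N, K)
    d_img = torch.cdist(xy, centers_xy)                               # (N, K)
    assign = (d_feat + lam * d_img).argmin(dim=1)                     # (N,)

    merged = []
    for k in range(center_idx.numel()):
        members = tokens[assign == k]                                 # (M, C)
        attn = F.softmax(members @ centers_f[k] / tau, dim=0)         # (M,)
        merged.append((attn.unsqueeze(1) * members).sum(dim=0))
    return torch.stack(merged), assign                                # (K, C), (N,)

# toy usage: 1,920 grid tokens merged into 128 heterogeneous tokens
tokens, xy = torch.randn(1920, 64), torch.rand(1920, 2)
center_idx = torch.randperm(1920)[:128]
merged, assign = cluster_and_merge(tokens, xy, center_idx)
print(merged.shape)   # torch.Size([128, 64])
```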
Third, recovering multi-stage vision tokens to a pixel-
level feature map is proved to be beneficial for vision trans-
formers [ 46,62]. However, how to restore a regular image
feature map from heterogeneous tokens of irregular shapes
and various sizes is challenging. To transform adaptive tokens of each stage into feature maps, we propose an efficient
multi-stage feature reconstruction network. Specifically, the feature reconstruction network starts from the last stage of
clustering, gradually upsamples the tokens, and aggregates
the token features of the previous stage. The aggregated
tokens correspond to the pixels in the feature map one by
one and are reshaped into a feature map. As a result, accu-
rate 3D detection results can be obtained via a conventional
Mono3D detector using the enhanced feature map.
Experiments on KITTI dataset [ 17] demonstrate that our
method outperforms the SOTA methods by a large margin.
Such a framework can be applied to existing Mono3D detectors and is practical for industrial applications. The proposed MonoATT is ranked number one on the KITTI 3D
benchmark by submission. The whole suite of the code base
will be released and the experimental results will be posted
to the public leaderboard. We highlight the main contributions made in this paper as follows: 1) a novel online
Mono3D framework is introduced, leveraging an adaptive token transformer to improve the detection accuracy and guarantee a low latency; 2) a scoring network is proposed, which integrates prior knowledge to estimate keypoints for
progressive adaptive token generation; 3) a feature recon-
struction network is designed to reconstruct a detailed im-
age feature map from adaptive tokens efficiently.
|
Zhou_How_Can_Objects_Help_Action_Recognition_CVPR_2023 | Abstract
Current state-of-the-art video models process a video
clip as a long sequence of spatio-temporal tokens. How-
ever, they do not explicitly model objects, their interactions
across the video, and instead process all the tokens in the
video. In this paper, we investigate how we can use knowl-
edge of objects to design better video models, namely to
process fewer tokens and to improve recognition accuracy.
This is in contrast to prior works which either drop tokens
at the cost of accuracy, or increase accuracy whilst also
increasing the computation required. First, we propose an
object-guided token sampling strategy that enables us to re-
tain a small fraction of the input tokens with minimal im-
pact on accuracy. And second, we propose an object-aware
attention module that enriches our feature representation
with object information and improves overall accuracy. Our
resulting model, ObjectViViT, achieves better performance
when using fewer tokens than strong baselines. In partic-
ular, we match our baseline with 30%,40%, and 60% of
the input tokens on SomethingElse, Something-something
v2, and Epic-Kitchens, respectively. When we use Object-
ViViT to process the same number of tokens as our baseline,
we improve by 0.6 to 4.2 points on these datasets.
| 1. Introduction
Video understanding is a central task of computer vi-
sion and great progresses have been made recently with
transformer-based models which interpret a video as a se-
quence of spatio-temporal tokens [1,3,13,16,30,55]. How-
ever, videos contain a large amount of redundancy [53], es-
pecially when there is little motion or when backgrounds re-
main static. Processing videos with all these tokens is both
inefficient and distracting. As objects conduct motions and
actions [19,47], they present an opportunity to form a more
compact representation of a video, and inspire us to study
how we can use them to understand videos more accurately
and efficiently. Our intuition is also supported biologically
that we humans perceive the world by concentrating our fo-
cus on key regions in the scene [18, 42].
Figure 1. How can objects help action recognition? Consider
this action of picking up a bowl on a countertop packed with
kitchenware. Objects provide information to: (1) associate image
patches (colorful) from the same instance, and identify candidates
for interactions; (2) selectively build contextual information from
the redundant background patches (dark).
In this paper, we explore how we can use external ob-
ject information in videos to improve recognition accuracy,
and to reduce redundancy in the input (Figure 1). Cur-
rent approaches in the literature have proposed object-based
models which utilize external object detections to improve
action recognition accuracy [21, 35]. However, they aim
to build architectures to model objects and overall bring a
notable computational overhead. We show besides gain-
ing accuracy, objects are extremely useful to reduce token
redundancy, and give a more compact video representa-
tion. Fewer tokens also enable stronger test strategies (e.g.,
multi-crop, longer videos), and overall improve accuracy
further. On the other hand, prior work on adaptive trans-
formers [36, 45, 48, 50] dynamically reduce the number of
tokens processed by the network conditioned on the input.
Since these methods learn token-dropping policies end-to-
end without external hints, there can be a chicken-and-egg
problem: we need good features to know which token to
drop, and need many tokens to learn good features. As a re-
sult, they usually suffer from a performance decrease when
dropping tokens. We show we can perform both goals,
namely reducing tokens processed by the transformer, as
well as improving accuracy, within a unified framework.
Concretely, we propose an object -based video vision
transformer, ObjectViViT. As shown in Figure 2, we first
propose an object-guided token sampling strategy (OGS)
that uses object locations to identify foreground and back-
ground tokens. We retain relevant foreground tokens as
is, and aggressively downsample background tokens be-
Figure 2. Illustration of our object-based video vision transformer, ObjectViViT. ObjectViViT takes raw video pixels and off-the-shelf
object detections (bounding boxes) as input, and runs space-time attention on video tokens [1]. We use the detection boxes in two ways.
(1): we use object locations to downsample the patch tokens before running transformers (Object-Guided Token Sampling, more details in
Figure 3). (2): we run a customized attention module that creates object tokens from object-patch relations and uses them to enhance patch
features (Object-Aware Attention Module, more details in Figure 4).
fore forwarding them to the transformer module. Secondly,
to fully leverage the relation between objects and the un-
structured spatial-temporal patches, we introduce an object-
aware attention module (OAM). This attention module first
creates object tokens by grouping patch tokens from the
same object using an object-weighted pooling, and then ap-
plies space-time attention on the concatenated object and
patch tokens. This way, patch features are augmented with
their related object information. Both OGS and OAM are
complementary. They can be used individually to improve
either token-compactness or accuracy, or can be used to-
gether to get benefits of both.
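A rough sketch of the object-guided token sampling idea follows: patch tokens whose centers fall inside any detection box are treated as foreground and kept, while background tokens are randomly subsampled. The keep ratio and the random background sampling are assumptions; the actual OGS strategy may differ in detail.

```python
import torch

def object_guided_sampling(tokens, centers, boxes, bg_keep=0.25):
    """tokens: (N, C) patch tokens, centers: (N, 2) patch-center (x, y) coordinates,
    boxes: (M, 4) detections as (x1, y1, x2, y2).
    Returns all foreground tokens plus a random subset of background tokens."""
    x, y = centers[:, 0:1], centers[:, 1:2]                      # (N, 1)
    inside = (x >= boxes[:, 0]) & (x <= boxes[:, 2]) & \
             (y >= boxes[:, 1]) & (y <= boxes[:, 3])             # (N, M)
    fg = inside.any(dim=1)                                       # (N,)
    bg_idx = torch.nonzero(~fg).squeeze(1)
    keep_bg = bg_idx[torch.randperm(bg_idx.numel())[: int(bg_keep * bg_idx.numel())]]
    keep = torch.cat([torch.nonzero(fg).squeeze(1), keep_bg])
    return tokens[keep], keep

# toy usage: a 16x16 patch grid with two detection boxes
centers = torch.stack(torch.meshgrid(torch.arange(16.), torch.arange(16.),
                                     indexing="xy"), dim=-1).reshape(-1, 2)
tokens = torch.randn(256, 768)
boxes = torch.tensor([[2., 2., 6., 7.], [9., 4., 14., 12.]])
kept, idx = object_guided_sampling(tokens, centers, boxes)
print(kept.shape)
```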
We validate our method with extensive experiments on
SomethingElse [32], Something-Something [17], and the
Epic Kitchens datasets [8]. Using our object-guided to-
ken sampling, we find that we can process 60%∼90%
of the input tokens without losing any accuracy. And by
using our object-aware attention module alone, we outper-
form a competitive ViViT [1] baseline by 0.6 to 2.1 points.
Combining both modules, ObjectViViT improves token-
compactness and accuracy even further, matching baseline
performance by processing 30%,40%, and 60% of the in-
put tokens for the three datasets, respectively. Finally, un-
der the same number of processed tokens but a higher tem-
poral resolution, our model with dropped tokens improve
upon baselines by up to 4.2 points. Our code is released at
https://github.com/google-research/scenic.
|
Zhou_Human_Body_Shape_Completion_With_Implicit_Shape_and_Flow_Learning_CVPR_2023 | Abstract
In this paper, we investigate how to complete human
body shape models by combining shape and flow estimation
given two consecutive depth images. Shape completion is
a challenging task in computer vision that is highly under-
constrained when considering partial depth observations.
Besides model based strategies that exploit strong priors,
and consequently struggle to preserve fine geometric de-
tails, learning based approaches build on weaker assump-
tions and can benefit from efficient implicit representations.
We adopt such a representation and explore how the motion
flow between two consecutive frames can contribute to the
shape completion task. In order to effectively exploit the
flow information, our architecture combines both estima-
tions and implements two features for robustness: First, an
all-to-all attention module that encodes the correlation be-
tween points in the same frame and between corresponding
points in different frames; Second, a coarse-dense to fine-
sparse strategy that balances the representation ability and
the computational cost. Our experiments demonstrate that
the flow actually benefits human body model completion.
They also show that our method outperforms the state-of-
the-art approaches for shape completion on 2 benchmarks,
considering different human shapes, poses, and clothing. | 1. Introduction
The inference of human body shape information from
depth observations has become a standard problem in com-
puter vision. Depth sensors are now common and enable
the digitization of humans using every day devices such as
tablets or mobile phones, in turn opening a way to new con-
sumer applications that build on this ability, e.g. virtual try
on or avatar applications. Solving the problem efficiently is
however difficult given human body observations that are,
by construction, incomplete with a single frame. Consider-
ing several frames over time, as often available, can how-
ever improve the body shape estimation, provided that tem-
poral consistency is effectively exploited. In this paper we
consider how to build complete human shape models given
these partial depth observations. Particularly we investigate
how the combination of shape and motion flow estimations
can benefit such shape completion tasks.
Different strategies for shape completion have been ex-
plored that exploit various priors over human shapes. Para-
metric body models such as SMPL [24] can be used as
in [5, 31, 34]. The strong prior assumed with a paramet-
ric model ensures spatially and temporally coherent human
shape predictions. However these predictions are inherently
restricted to limited shape spaces. The preservation of the
geometric details that can be present in depth maps, e.g.
face attributes or cloth wrinkles, is arduous. Other strate-
gies build on weaker priors, relying on learning to char-
acterize shapes and their completion. Early contributions
in this respect [9, 40, 52] explore encoder-decoder network
architectures with 3D convolutions and successfully pre-
dict complete distance fields in explicit voxel grids. They
were subsequently extended to implicit representations that
can provide continuous 3D shape functions such as occu-
pancy [7,28], or distance fields [8,32], with limited memory
costs compared to explicit voxel representations. Further-
more, temporal features provided by depth map sequences
can also be accounted for with implicit representations that
then become spatio-temporal [58]. Yet, without explicit cor-
respondences over time, learning based methods can only
partially exploit temporal consistency.
Such correspondences are encoded in the motion field
between the input depth maps. This field, the scene flow,
is traditionally estimated pixel-wise as an extension of the
2D optical flow. Recent learning-based strategies [22, 50]
generally focus on observed points only and do not target
shape estimation nor completion. Closer to this objective,
OFlow [30] proposes a 4D model that combines shape and
flow information in an implicit continuous representation.
While the method can account for point clouds, it does not
easily extend to shape completion with depth maps. Fur-
thermore the shape and flow are estimated independently
whereas we advocate a combined estimation of both.
To this aim, we propose a learning-based approach that
considers two consecutive depth images as input and esti-
mates a continuous complete representation of both shape
occupancy (SDF) and motion as implicit functions, lever-
aging their representational advantages demonstrated for
both problems independently. Our experiments show that
such a combined estimation benefits the shape completion
task with results that outperform existing works on standard
datasets. The proposed approach is pyramidal and considers
image features that are extracted in a coarse to fine manner,
preserving both local and more global shape properties. In
addition, with the aim to enforce consistency in both spatial
and temporal domains, we take inspiration from the scene
flow work [47] and introduce an all-to-all attention mech-
anism that accounts for spatial and temporal correlations
between points in the two frames considered. Comprehen-
sive ablation tests demonstrate the individual contributions
of the pyramidal framework and attention mechanisms. Ex-
periments were conducted on DFAUST [6] and CAPE [26]
with both undressed and dressed humans. We provide com-
parisons with the state-of-the-art approaches for both shape
and flow estimations and show consistent shape completion
improvements with our method.
Method: Shape Completion / Continuous Shape Rep. / Continuous Flow Rep. / Detail Preservation / Scene Represent.
SceneFlow [22, 47, 50]: ✗ / ✗ / ✗ / ✗ / Points
4DComplete [19]: ✓ / ✗ / ✗ / ✓ / Voxels
STIF [58]: ✓ / ✓ / ✗ / ✓ / Implicit
NPMs [31]: ✓ / ✓ / ✗ / ✓ / Para. model
OFlow [30]: ✓ / ✓ / ✓ / ✗ / Implicit
Ours: ✓ / ✓ / ✓ / ✓ / Implicit
Table 1. Classification of related methods with respect to their
abilities to: handle partial inputs; provide continuous shape and
flow representations; preserve geometric details in the observa-
tions.
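The combined estimation can be illustrated with a single implicit decoder that maps a query point and its pooled image feature to both a signed distance and a scene-flow vector, so shape and motion share one representation. The network sizes and conditioning below are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ShapeFlowDecoder(nn.Module):
    """Implicit decoder: (3D query point, conditioning feature) -> (SDF, flow)."""
    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
        )
        self.sdf_head = nn.Linear(hidden, 1)    # signed distance to the body surface
        self.flow_head = nn.Linear(hidden, 3)   # motion of the point toward the next frame

    def forward(self, points, feats):
        h = self.mlp(torch.cat([points, feats], dim=-1))
        return self.sdf_head(h).squeeze(-1), self.flow_head(h)

# usage: query 4,096 points conditioned on pyramidally pooled depth features
decoder = ShapeFlowDecoder()
pts, feats = torch.rand(4096, 3) * 2 - 1, torch.randn(4096, 128)
sdf, flow = decoder(pts, feats)
print(sdf.shape, flow.shape)   # torch.Size([4096]) torch.Size([4096, 3])
```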
|
Zheng_NeuFace_Realistic_3D_Neural_Face_Rendering_From_Multi-View_Images_CVPR_2023 | Abstract
Realistic face rendering from multi-view images is bene-
ficial to various computer vision and graphics applications.
Due to complex spatially-varying reflectance properties and
geometry characteristics of faces, however, it remains chal-
lenging to recover 3D facial representations both faithfully
and efficiently in the current studies. This paper presents a
novel 3D face rendering model, namely NeuFace , to learn
accurate and physically-meaningful underlying 3D repre-
sentations by neural rendering techniques. It naturally in-
corporates the neural BRDFs into physically based render-
ing, capturing sophisticated facial geometry and appear-
ance clues in a collaborative manner. Specifically, we intro-
duce an approximated BRDF integration and a simple yet
new low-rank prior, which effectively lower the ambiguities
and boost the performance of the facial BRDFs. Extensive
experiments are performed to demonstrate the superiority
of NeuFace in human face rendering, along with a decent
generalization ability to common objects. Code is released
at NeuFace.
| 1. Introduction
Rendering realistic human faces with controllable view-
points and lighting is now becoming ever increasingly im-
portant with its applications ranging from game production,
movie industry, to immersive experiences in the Metaverse.
Various factors, including the sophisticated geometrical dif-
ferences among individuals, the person-specific appearance
idiosyncrasies, along with the spatially-varying reflectance
properties of skins, collectively make faithful face rendering
a rather difficult problem.
According to photogrammetry, the pioneering studies on
this issue generally leverage complex active lighting setups,
e.g., LightStage [8], to build 3D face models from multiple
*Corresponding author.
Figure 1. Demonstration of the face rendering results (rendered, diffuse, specular, geometry, relighting) and recovered underlying 3D representations from unprocessed facial multi-view images.
photos of an individual, where accurate shape attributes and
high-quality diffuse and specular reflectance properties are
commonly acknowledged as the premises of its success. An
elaborately designed workflow is required, typically involv-
ing a series of stages such as camera calibration, dynamic
data acquisition, multi-view stereo, material estimation, and
texture parameterization [42]. While a compelling and con-
vincing 3D face model can be finally obtained, this output
highly depends on the expertise of the engineers and artists
with significant manual efforts, as the multi-step process in-
evitably brings diverse optimization goals.
Recently, 3D neural rendering, which offers an end-to-
end alternative, has demonstrated promising performance in
recovering scene properties from real-world imageries, such
as view-dependent radiance [26, 28, 36, 38, 47] and geome-
try [33, 48, 49, 54, 55]. It’s mainly credited to the disentan-
glement of the learnable 3D representations and the differ-
entiable image formation process, free of the tedious pho-
togrammetry pipeline. However, like classical function fit-
ting, inverse rendering is fundamentally under-constrained,
which may incur badly-conditioned fits of the underly-
ing 3D representations, especially for intricate cases, e.g.,
non-Lambertian surfaces with view-dependent highlights.
With the trend in the combination of computer graphics
and learning techniques, several attempts take advantage
of physically motivated inductive biases and present Phys-
ically Based Rendering (PBR) [14, 31, 47, 56], where Bidi-
rectional Reflectance Distribution Functions (BRDFs) are
widely adopted. By explicitly mimicking the interaction
of the environment light with the scene, they facilitate net-
work optimization and deliver substantial gains. Unfortu-
nately, the exploited physical priors are either heuristic or
analytic [7, 20, 44], limited to a small set of real-world ma-
terials, e.g., metal, incapable of describing human faces.
For realistic face rendering, the most fundamental issue
lies in accurately modeling the optical properties of multi-
layered facial skin [21]. In particular, the unevenly dis-
tributed fine-scale oily layers and epidermis reflect the inci-
dent lights irregularly, leading to complex view-dependent
and spatially-varying highlights. This characteristic and the
low-textured nature of facial surfaces strongly amplify the
shape-appearance ambiguity. Moreover, subsurface scatter-
ing between the underlying dermis and other skin layers fur-
ther complicates this problem.
In this paper, we follow the PBR paradigm for its poten-
tial in learning 3D representations and make the first step
towards realistic 3D neural face rendering, mainly target-
ing complex skin reflection modeling. Our method, namely
NeuFace , is able to recover faithful facial reflectance and
geometry from only multi-view images. Concretely, we es-
tablish a PBR framework to learn neural BRDFs to describe
facial skin, which simulates physically-correct light trans-
port with a much higher representation capability. By using
a differentiable Signed Distance Function (SDF) based rep-
resentation, i.e.,ImFace [61], as the shape prior, the facial
appearance and geometry field can be synchronously opti-
mized in inverse rendering.
Compared to the analytic BRDFs, the neural ones allow
richer representations for sophisticated material like facial
skin. In spite of this superiority, such representations pose
challenges to computational cost and data demand during
training. To tackle these difficulties, the techniques in real-
time rendering [1] are adapted to separate the hemisphere
integral of neural BRDFs, where the material and light in-
tegrals are individually learned instead, bypassing the mas-
sive Monte-Carlo sampling phase [34] required by numer-
ical solutions. Furthermore, a low-rank prior is introduced
into the spatially-varying facial BRDFs, which greatly re-
stricts the solution space thereby diminishing the need for
large-scale training observations. These model designs in-
deed enable NeuFace to accurately and stably describe how
the light interacts with the facial surface as in the real 3D
space. Fig. 1 displays an example.
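One concrete reading of the low-rank prior is a factorization of the spatially-varying BRDF into a few shared angular basis lobes mixed by per-point coefficients. The sketch below illustrates that reading only; the basis count, network shapes, and non-negativity choices are assumptions, not the NeuFace model.

```python
import torch
import torch.nn as nn

class LowRankNeuralBRDF(nn.Module):
    """f_r(x, wi, wo) ~= sum_k c_k(x) * b_k(wi, wo): K shared angular bases
    mixed by per-surface-point coefficients (low-rank sketch)."""
    def __init__(self, num_basis=8, pos_dim=3, hidden=64):
        super().__init__()
        self.coeff = nn.Sequential(                       # spatial branch: c_k(x)
            nn.Linear(pos_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_basis), nn.Softmax(dim=-1),
        )
        self.basis = nn.Sequential(                       # angular branch: b_k(wi, wo)
            nn.Linear(6, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, num_basis), nn.Softplus(),  # non-negative reflectance
        )

    def forward(self, x, wi, wo):
        c = self.coeff(x)                                 # (N, K)
        b = self.basis(torch.cat([wi, wo], dim=-1))       # (N, K)
        return (c * b).sum(dim=-1)                        # (N,) scalar BRDF value

brdf = LowRankNeuralBRDF()
x, wi, wo = torch.rand(1024, 3), torch.randn(1024, 3), torch.randn(1024, 3)
print(brdf(x, wi, wo).shape)                              # torch.Size([1024])
```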
The main contributions of this study include: 1) A novel
framework with naturally-bonded PBR as well as neural
BRDF representations, which collaboratively captures fa-
cial geometry and appearance properties in complicated fa-
cial skin. 2) A new and simple low-rank prior, which sig-
nificantly facilitates the learning of neural BRDFs and improves the appearance recovering performance. 3) Impres-
sive face rendering results from only multi-view images,
applicable to various applications such as relighting, along
with a decent generalization ability to common objects.
|
Zhao_Zero-Shot_Text-to-Parameter_Translation_for_Game_Character_Auto-Creation_CVPR_2023 | Abstract
Recent popular Role-Playing Games (RPGs) saw the
great success of character auto-creation systems. The bone-
driven face model controlled by continuous parameters (like
the position of bones) and discrete parameters (like the
hairstyles) makes it possible for users to personalize and
customize in-game characters. Previous in-game character
auto-creation systems are mostly image-driven, where fa-
cial parameters are optimized so that the rendered charac-
ter looks similar to the reference face photo. This paper pro-
poses a novel text-to-parameter translation method (T2P) to
achieve zero-shot text-driven game character auto-creation.
With our method, users can create a vivid in-game char-
acter with arbitrary text description without using any ref-
erence photo or editing hundreds of parameters manually.
In our method, taking the power of large-scale pre-trained
multi-modal CLIP and neural rendering, T2P searches both
continuous facial parameters and discrete facial parame-
*Corresponding Authors.
ters in a unified framework. Due to the discontinuous pa-
rameter representation, previous methods have difficulty in
effectively learning discrete facial parameters. T2P , to our
best knowledge, is the first method that can handle the op-
timization of both discrete and continuous parameters. Ex-
perimental results show that T2P can generate high-quality
and vivid game characters with given text prompts. T2P
outperforms other SOTA text-to-3D generation methods on
both objective evaluations and subjective evaluations.
| 1. Introduction
Role-Playing Games (RPGs) are praised by gamers for
providing immersive experiences. Some of the recent pop-
ular RPGs, like Grand Theft Auto Online1and Naraka2,
have opened up character customization systems to play-
ers. In such systems, in-game characters are bone-driven
and controlled by continuous parameters, like the position,
1https://www.rockstargames.com/GTAOnline
2http://www.narakathegame.com
rotation, scale of each bone, and discrete parameters, like
the hairstyle, beard styles, make-ups, and other facial el-
ements. By manually adjusting these parameters, players
can control the appearance of the characters in the game
according to their personal preferences, rather than using
predefined character templates. However, it is cumbersome
and time-consuming for users to manually adjust hundreds
of parameters - usually taking up to hours to create a char-
acter that matches their expectations.
To automatically create in-game characters, the method
named Face-to-parameter translation (F2P) was recently
proposed to automatically create game characters based on
a single input face image [38]. F2P and its variants [39, 41]
have been successfully used in recent RPGs like Narake
and Justice, and virtual meeting platform Yaotai. Recent
3D face reconstruction methods [2, 7, 26, 33, 42–44] can
also be adapted to create game characters. However, all
the above-mentioned methods require reference face pho-
tos for auto-creation. Users may take time to search, down-
load and upload suitable photos for their expected game
characters. Compared with images, text prompts are more
flexible and time-saving for game character auto-creation.
A very recent work AvatarCLIP [10] achieved text-driven
avatar auto-creation and animation. It optimizes implicit
neural networks to generate characters. However, the cre-
ated characters are controlled by implicit parameters, which
lack explicit physical meanings, thus manually adjusting
them needs extra designs. This will be inconvenient for
players or game developers to further fine-tune the created
game characters as they want.
To address the above problems, we propose text-to-
parameter translation (T2P) to tackle the in-game charac-
ter auto-creation task based on arbitrary text prompts. T2P
takes the power of large-scale pre-trained CLIP to achieve
zero-shot text-driven character creation and utilizes neural
rendering to make the rendering of in-game characters dif-
ferentiable to accelerate the parameters optimization. Pre-
vious works like F2Ps give up controlling discrete facial
parameters due to the problem of discontinuous parameter
gradients. To our best knowledge, the proposed T2P is the
first method that can handle both continuous and discrete
facial parameters optimization in a unified framework to
create vivid in-game characters. T2P is also the first text-
driven automatic character creation suitable for game envi-
ronments.
Our method consists of a pre-training stage and a text-
to-parameter translation stage. In the pre-training stage, we
first train an imitator to imitate the rendering behavior of the
game engine to make the parameter searching pipeline end-
to-end differentiable. We also pre-train a translator to trans-
late the CLIP image embeddings of random game charac-
ters to their facial parameters. Then at the text-to-parameter
translation stage, on one hand, we fine-tune the translator on unseen CLIP text embeddings to predict continuous pa-
rameters given text prompt rather than images, on the other
hand, discrete parameters are evolutionally searched. Fi-
nally, the game engine takes in the facial parameters and
creates the in-game characters which correspond to the text
prompt described, as shown in Fig 1. Objective evaluations
and subjective evaluations both indicate our method outper-
forms other SOTA zero-shot text-to-3D methods.
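The two kinds of search can be illustrated jointly in a toy sketch: continuous parameters are optimized by gradient ascent on a CLIP-style similarity through a differentiable stand-in for the imitator, and a single discrete slot (e.g., a hairstyle id) is searched evolutionarily with the continuous part frozen. Every module here (the imitator, the scoring function, the mutation rule) is a placeholder assumption, not the trained networks from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: `imitator` plays the role of the differentiable renderer
# (continuous params + discrete one-hot -> image embedding); `text_emb` stands in
# for a CLIP text embedding of the prompt.
N_CONT, N_HAIR, EMB = 256, 20, 512
imitator = nn.Linear(N_CONT + N_HAIR, EMB)
text_emb = F.normalize(torch.randn(EMB), dim=0)
score = lambda p_cont, hair_onehot: F.cosine_similarity(
    imitator(torch.cat([p_cont, hair_onehot], dim=-1)), text_emb, dim=-1)

# 1) continuous parameters: direct gradient ascent on the CLIP-style score
p_cont = torch.zeros(N_CONT, requires_grad=True)
opt = torch.optim.Adam([p_cont], lr=0.05)
for _ in range(200):
    loss = -score(torch.sigmoid(p_cont), torch.zeros(N_HAIR))
    opt.zero_grad(); loss.backward(); opt.step()
cont = torch.sigmoid(p_cont).detach()

# 2) discrete parameter (e.g. hairstyle id): simple evolutionary search
pop = torch.randint(N_HAIR, (16,))
for _ in range(10):
    with torch.no_grad():
        fit = torch.stack([score(cont, F.one_hot(h, N_HAIR).float()) for h in pop])
    parents = pop[fit.topk(8).indices]
    children = (parents[torch.randint(8, (8,))] + torch.randint(-2, 3, (8,))) % N_HAIR
    pop = torch.cat([parents, children])
best_hair = pop[0].item()
```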
Our contributions are summarized as follows:
1) We propose a novel text-to-parameter translation
method for zero-shot in-game character auto-creation. To
the best of our knowledge, we are the first to study text-
driven character creation ready for game environments.
2) The proposed T2P can optimize both continuous and
discrete parameters in a unified framework, unlike earlier
methods giving up controlling difficult-to-learn discrete pa-
rameters.
3) The proposed text-driven auto-creation paradigm is
flexible and friendly for users, and the predicted physically
meaningful facial parameters enable players or game devel-
opers to further finetune the game character as they want.
|
Zou_CLOTH4D_A_Dataset_for_Clothed_Human_Reconstruction_CVPR_2023 | Abstract
Clothed human reconstruction is the cornerstone for cre-
ating the virtual world. To a great extent, the quality of re-
covered avatars decides whether the Metaverse is a passing
fad. In this work, we introduce CLOTH4D, a clothed hu-
man dataset containing 1,000 subjects with varied appear-
ances, 1,000 3D outfits, and over 100,000 clothed meshes
with paired unclothed humans, to fill the gap in large-
scale and high-quality 4D clothing data. It enjoys ap-
pealing characteristics: 1) Accurate and detailed cloth-
ing textured meshes—all clothing items are manually cre-
ated and then simulated in professional software, strictly
following the general standard in fashion design. 2) Sep-
arated textured clothing and under-clothing body meshes,
closer to the physical world than single-layer raw scans.
3) Clothed human motion sequences simulated given a set
of 289 actions, covering fundamental and complicated dy-
namics. Upon CLOTH4D, we novelly designed a series of temporally-aware metrics to evaluate the temporal stability
of the generated 3D human meshes, which has been over-
looked previously. Moreover, by assessing and retraining
current state-of-the-art clothed human reconstruction meth-
ods, we reveal insights, present improved performance, and
propose potential future research directions, confirming our
dataset’s advancement. The dataset is available at1.
| 1. Introduction
As we enter the volumetric and XR content era, re-
searchers have been trailblazing their way into the Meta-
verse. With the converging of technologies and practical
applications, e.g., fashion NFTs (non-fungible tokens), im-
mersive AR and VR, and games, clothed human recon-
struction demands are rapidly growing. While current re-
search has made astonishing results in creating digital hu-
1www.github.com/AemikaChow/AiDLab-fAshIon-Data
*X. Zou and X. Han contribute equally.†Corresponding author.
Table 1. Comparisons of CLOTH4D with existing representative datasets. Gray color indicates synthetic datasets generated with graphics
engines. #Subjects: number of peoples in different appearances; #Action: number of actions adopted; #Scans: numbers of 3D meshes;
2D Pattern: 2D clothing pattern; TexCloth: with textured clothed model; TexHuman: with textured naked human model. w/ SMPL: with
registered SMPL [33] parameters. Public: publicly available and free of charge. Photorealistic: whether the images in the dataset are
realistic. -: not applicable or reported. CLOTH4D presents more desirable characteristics compared with others.
Dataset: #Subjects / #Action / #Scan / 2D Pattern / TexCloth / TexHuman / w/ SMPL / Public / Photorealistic
BUFF [52]: 6 / - / 13.6k / - / ✓ / - / ✓ / ✓ / ✓
RenderPeople [1]: - / - / 825 / - / ✓ / - / ✓ / - / ✓
DeepWrinkles [30]: 2 / 2 / 9.2k / - / ✓ / - / - / - / ✓
CAPE [35]: 15 / 600 / 140k / - / - / - / ✓ / ✓ / ✓
THuman2.0 [51]: 200 / - / 525 / - / ✓ / - / ✓ / ✓ / ✓
DRAPE [16]: 7 / 23 / 24.5k / - / - / - / - / - / -
Wang et al. [47]: - / - / 24k / ✓ / ✓ / - / ✓ / ✓ / -
3DPeople [40]: 80 / 72 / - / - / ✓ / - / - / - / ✓
DCA [43]: - / 56 / 7.1k / - / - / - / ✓ / - / -
GarNet [17]: 600 / - / 18.8k / - / - / - / ✓ / ✓ / -
TailorNet [38]: 9 / - / 5.5k / - / ✓ / - / ✓ / ✓ / -
Cloth3D [8]: 8.5k / 7.9k / 2.1M / - / ✓ / - / ✓ / ✓ / -
Cloth3D++ [36]: 9.7k / 8k / 2.2M / ✓ / ✓ / ✓ / ✓ / ✓ / -
CLOTH4D: 1k / 289 / 100k / ✓ / ✓ / ✓ / ✓ / ✓ / ✓
mans, these reconstructed meshes have issues, e.g., flex-
ible body motions and diverse appearances, owing to the
lack of datasets with richness in clothing and realistic dy-
namics of garments. To this end, we introduce CLOTH4D,
an open-sourced dataset facilitating physically plausible dy-
namic clothed human reconstruction.
Prior to us, many datasets have been collected, and we
sort out them in Table 1. Currently, scanned datasets are
widely adopted as they are photorealistic and can be eas-
ily processed to watertight meshes, which does an excel-
lent favor for current deep models to learn an implicit func-
tion (e.g., signed distance function) followed by marching
cubes [34] for surface reconstruction. However, it is born
with some weaknesses: 1) Scanned meshes are single-layer
and inherently fail to capture the space between clothing
and skin surface. Thus, body shape under clothing can-
not be accurately inferred, let alone the multi-layer and thin
clothing structures as in the real physical world. 2) It is
time-consuming and expensive to obtain high-quality and
large-scale temporal scanned sequences (i.e., 4D scanned
sequences) due to the limited efficiency and precision of 4D
scanners, especially for complicated clothing and large mo-
tions. Although synthetic datasets can to some extent over-
come these limitations, existing synthetic datasets are either
of small scale in terms of appearances and motions or are
highly unrealistic. Moreover, many datasets are not made
publicly available and free.
In contrast, CLOTH4D possesses several attractive at-
tributes: 1) We made great efforts to the diversity and
quality of clothing. All clothes are manually designed in
CLO [3] and cater to the requirement of the fashion indus-
try. 2) Meshes in CLOTH4D are clothing/humans sepa-
rated. Such flexibility makes studying and modeling the relations and interactions between clothing simulation and
body movement possible. 3) CLOTH4D provides plenty of
temporal motion sequences with realistic clothing dynam-
ics. As the human body moves, the dressed clothing, e.g.,
the skirt in Figure 1, naturally deforms. 4) The dataset is
large-scale and openly accessible.
To demonstrate the advantages of CLOTH4D, we use it
to evaluate the state-of-the-art (SOTA) clothed human re-
construction methods. In addition to the generally adopted
static evaluation metrics, we propose a set of temporally-
aware metrics to assess the temporal coherence in a video
inference scenario thanks to the rich and true-to-life 4D syn-
thetic sequences in the dataset. Quantitative and qualitative
results of SOTA methods on CLOTH4D suggest that our
dataset is challenging and the temporal stability of the re-
constructed mesh is vital for evaluating the perceptual qual-
ity. Meanwhile, we retrain SOTA methods on CLOTH4D,
revealing interesting observations of how they perform on
multi-layer meshes with thin clothing structures. With in-
depth analysis and a summary of challenges for the exist-
ing approaches, CLOTH4D makes an essential step toward
more realistic reconstructions of clothed humans and stim-
ulates several exciting future work directions. All in all:
•We contribute CLOTH4D, a large-scale, high-quality,
and open-accessible 4D synthetic dataset for clothed human
reconstruction.
•We introduce a series of temporally-aware metrics to
evaluate the reconstructed performance in the aspect of tem-
poral consistency.
•With the proposed dataset and metrics, we thoroughly
analyze the pros and cons of SOTAs, summarize the existing
challenges toward more realistic 3D modeling, and propose
potential new directions.
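As one plausible instantiation of a temporally-aware measure, the sketch below reports both the mean per-frame reconstruction error over a 4D sequence and the mean frame-to-frame change of that error as a stability proxy; the metrics actually proposed with CLOTH4D may be defined differently.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets (N, 3) / (M, 3); brute force."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def temporal_error_stats(pred_seq, gt_seq):
    """pred_seq / gt_seq: lists of per-frame point clouds from a 4D sequence.
    Returns (mean error, mean absolute frame-to-frame error change); the second
    value is one simple proxy for temporal stability of the reconstruction."""
    errs = np.array([chamfer(p, g) for p, g in zip(pred_seq, gt_seq)])
    return errs.mean(), np.abs(np.diff(errs)).mean()

# toy usage with random clouds standing in for reconstructed / ground-truth meshes
rng = np.random.default_rng(0)
pred = [rng.normal(size=(500, 3)) for _ in range(8)]
gt = [rng.normal(size=(500, 3)) for _ in range(8)]
print(temporal_error_stats(pred, gt))
```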
Figure 2. Pipeline for creating instances in CLOTH4D ((i) pattern making, (ii) clothed 3D avatar, (iii) clothing simulation), which primarily adopts CLO for clothing design and simulation, Mixamo for animation, and Blender for processing and exporting meshes.
|
Zhou_Procedure-Aware_Pretraining_for_Instructional_Video_Understanding_CVPR_2023 | Abstract
Our goal is to learn a video representation that is useful
for downstream procedure understanding tasks in instruc-
tional videos. Due to the small amount of available an-
notations, a key challenge in procedure understanding is
to be able to extract from unlabeled videos the procedu-
ral knowledge such as the identity of the task (e.g., ‘make
latte’), its steps (e.g., ‘pour milk’), or the potential next
steps given partial progress in its execution. Our main in-
sight is that instructional videos depict sequences of steps
that repeat between instances of the same or different tasks,
and that this structure can be well represented by a Proce-
dural Knowledge Graph ( PKG), where nodes are discrete
steps and edges connect steps that occur sequentially in
the instructional activities. This graph can then be used
to generate pseudo labels to train a video representation
that encodes the procedural knowledge in a more accessi-
ble form to generalize to multiple procedure understand-
ing tasks. We build a PKG by combining information from
a text-based procedural knowledge database and an unla-
beled instructional video corpus and then use it to gener-
ate training pseudo labels with four novel pre-training ob-
jectives. We call this PKG-based pre-training procedure
and the resulting model Paprika ,Procedure- Aware PRe-
training for Instructional Knowledge Acquisition. We eval-
uate Paprika on COIN and CrossTask for procedure un-
derstanding tasks such as task recognition, step recogni-
tion, and step forecasting. Paprika yields a video rep-
resentation that improves over the state of the art: up to
11.23% gains in accuracy in 12 evaluation settings. Im-
plementation is available at https://github.com/
salesforce/paprika .
| 1. Introduction
Instructional videos depict humans demonstrating how
to perform multi-step tasks such as cooking, making up
and embroidering, repairing, or creating new objects. For
a holistic instructional video understanding, an agent has to
acquire procedural knowledge : structural information about
Figure 1. Training a video representation for procedure un-
derstanding with supervision from a procedural knowledge
graph : the structure observed in instructions for procedures (from
text, from videos) corresponds to sequences of steps that repeat
between instances of the same or different tasks; this structure is
well represented by a Procedural Knowledge Graph ( PKG). (a) We
build a PKGcombining text instructions with unlabeled video data,
and (b) obtain a video representation by encoding the human pro-
cedural knowledge from the PKGinto a more general procedure-
aware model ( Paprika ) generating pseudo labels with the PKG
for several procedure understanding objectives. Paprika can
then be easily applied to multiple downstream procedural tasks.
tasks such as the identification of the task, its steps, or fore-
casting the next steps. An agent that has acquired procedu-
ral knowledge is said to have gained procedure understand-
ingof instructional videos, which can be then exploited in
multiple real-world applications such as instructional video
labeling, video chapterization, process mining and, when
connected to a robot, robot task planning.
Our goal is to learn a novel video representation that can
be applicable to a variety of procedure understanding tasks
in instructional videos. Unfortunately, prior methods for
video representation learning are inadequate for this goal, as
they lack the ability to capture procedural knowledge. This
is because most of them are trained to learn the (weak) cor-
respondence between visual and text modalities, where the
text comes either from automatic-speech recognition (ASR)
on the audio [ 43,77], which is noisy and error-prone, or
from a caption-like descriptive sentence (e.g., “a video of
a dog”) [ 33], which does not contain sufficient informa-
tion for fine-grained procedure understanding tasks such as
step recognition or anticipation. Others are pre-trained on
masked frame modeling [ 34], frame order modeling [ 34]
or video-audio matching [ 1], which gives them basic video
spatial, temporal or multimodal understanding but is too
generic for procedure understanding tasks.
Closer to our goal, Lin et al. [ 38] propose a video foun-
dation model for procedure understanding of instructional
videos by matching the videos’ ASR transcription (i.e., sub-
title/narration) to procedural steps from a text procedural
knowledge database (wikiHow [ 30]) and training the video-
representation-learning model to match each part of an in-
structional video to the corresponding step. Their method
only acquires isolated step knowledge in pre-training and is
not as suitable to gain sophisticated procedural knowledge.
We propose Paprika , from Procedure- Aware PRe-
training for Instructional Knowledge Acquisition, a method
to learn a novel video representation that encodes procedu-
ral knowledge ( Fig.1). Our main insight is that the structure
observed in instructional videos corresponds to sequences
of steps that repeat between instances of the same or differ-
ent tasks. This structure can be captured by a Procedural
Knowledge Graph ( PKG) where nodes are discretized steps
annotated with features, and edges connect steps that occur
sequentially in the instructional activities. We build such a
graph by combining the text and step information from wik-
iHow and the visual and step information from unlabeled
instructional video datasets such as HowTo100M [ 45] auto-
matically. The resulting graph encodes procedural knowl-
edge about tasks and steps, and about the temporal order
and relation information of steps.
We then train our Paprika model on multiple pre-
training objectives using the PKG to obtain the training la-
bels. The proposed four pre-training objectives ( Sec. 3.3)
respectively focuses on procedural knowledge about the
step of a video, tasks that a step may belong to, steps that
a task would require, and the general order of steps. These
pre-training objectives are designed to allow a model to an-
swer questions about the subgraph of the PKGthat a video
segment may belong to. The PKGproduces pseudo labels
for these questions as supervisory signals to adapt video
representations produced by a video foundation model [ 9]
for robust and generalizable procedure understanding.
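The graph construction and pseudo-labeling can be prototyped as follows: each video segment is matched to its nearest step embedding, matched steps become nodes, temporally adjacent matches within a video add directed edges, and every segment inherits pseudo labels (its node and that node's successors). The matching rule and the reduced label set are simplifying assumptions relative to the four pre-training objectives.

```python
import numpy as np
from collections import defaultdict

def build_pkg(segment_feats, step_feats, video_ids):
    """segment_feats: (N, D) features of video segments in temporal order per video,
    step_feats: (S, D) embeddings of step sentences, video_ids: length-N video ids.
    Returns per-segment node assignments and a directed step graph (adjacency sets)."""
    sims = segment_feats @ step_feats.T                      # (N, S) similarity scores
    nodes = sims.argmax(axis=1)                              # nearest step = node id
    edges = defaultdict(set)
    for i in range(1, len(nodes)):
        if video_ids[i] == video_ids[i - 1]:                 # consecutive in the same video
            edges[nodes[i - 1]].add(nodes[i])
    return nodes, edges

def pseudo_labels(nodes, edges):
    """For each segment: (its step node, the plausible next nodes) as training targets."""
    return [{"step": int(n), "next_steps": sorted(int(m) for m in edges.get(n, set()))}
            for n in nodes]

# toy usage: 6 segments from 2 videos matched against 10 candidate steps
rng = np.random.default_rng(0)
seg, steps = rng.normal(size=(6, 32)), rng.normal(size=(10, 32))
nodes, edges = build_pkg(seg, steps, video_ids=[0, 0, 0, 1, 1, 1])
print(pseudo_labels(nodes, edges))
```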
Our contributions are summarized as follows:
(i)We propose a Procedural Knowledge Graph ( PKG) that
encodes human procedural knowledge from collectively
leveraging a text procedural knowledge database (wikiHow)
and an unlabeled instructional video corpus (HowTo100M).
(ii)We propose to elicit the knowledge in the PKGinto
Paprika , a procedure-aware model, using four pre-
training objectives. To that end, we produce pseudo labels
with the PKG that serve as supervisory signals to train
Paprika to learn to answer multiple questions about the
subgraph of the PKGthat a video segment may belong to.
(iii)We evaluate our method on the challenging COIN and
CrossTask datasets on downstream procedure understand-
ing tasks: task recognition, step recognition, and step fore-casting. Regardless of the capacity of the downstream
model (from simple MLP to the powerful Transformer), our
method yields a representation that outperforms the state of
the art – up to 11.23%gains in accuracy out of 12evalua-
tion settings.
|
Zhou_The_Treasure_Beneath_Multiple_Annotations_An_Uncertainty-Aware_Edge_Detector_CVPR_2023 | Abstract
Deep learning-based edge detectors heavily rely on
pixel-wise labels which are often provided by multiple annotators. Existing methods fuse multiple annotations using a simple voting process, ignoring the inherent ambiguity of edges and labeling bias of annotators. In this paper, we
propose a novel uncertainty-aware edge detector (UAED),
which employs uncertainty to investigate the subjectivity and ambiguity of diverse annotations. Specifically, we first convert the deterministic label space into a learnable Gaussian distribution, whose variance measures the degree of
ambiguity among different annotations. Then we regard the
learned variance as the estimated uncertainty of the predicted edge maps, and pixels with higher uncertainty are likely to be hard samples for edge detection. Therefore we design an adaptive weighting loss to emphasize the learning from those pixels with high uncertainty, which helps the network to gradually concentrate on the important pixels. UAED can be combined with various encoder-decoder backbones, and the extensive experiments demonstrate that UAED achieves superior performance consistently across multiple edge detection benchmarks. The source code is available at https://github.com/ZhouCX117/
UAED.
| 1. Introduction
Edge detection is a fundamental low-level vision task. It
greatly reduces irrelevant information and retains the mostimportant structural attributes. An efficient edge detectorcan generate structural edges that depict important areas
from a whole image, thereby benefiting many downstreamtasks [ 31,37,42,50,63]. Early pioneering methods [ 4,26]
compute the gradient and choose suitable thresholds to se-lect pixels with obvious brightness changes. Hand-crafted
*Corresponding author.
a
c d
efb
Figure 1. Illustration of the proposed Uncertainty-Aware Edge De-
tector (UAED). The first row shows (a) an image from the BSDStest set and (b) four diverse labels by different annotators. Thesecond row shows (c) the final edge label computed by majorityvoting and (d) our estimated uncertainty map (red means high un-certainty and blue means low uncertainty). The third row showsthe edge detection results by (e) EDTER [ 41] and (f) our UAED,
both processed by non-maximum suppression.
feature based methods [ 1,35] extract features from low-
level cues including density and texture, and then designcomplex rules to distinguish edges. Benefiting from thepowerful feature representation of Convolution Neural Net-work (CNN) and Transformer, recent works [ 16,32,41,59]
concentrate on designing elaborate network architectures tolearn high-level semantic representations.
The previous efforts are mainly dedicated to designing
advanced networks to extract distinctive features. Exceptfor the well-designed models, precise pixel-level annota-
tion is another key factor in building an efficient edge de-tector under the supervised setting. Due to the complex-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
15507
ity of the scenes and the ambiguity of the edges, most of
the works [ 1,36] involve multiple annotators for labeling
edges. However, the subjectivity of the annotators, e.g., dif-
ferent people may perceive the same scene differently andannotate the edges at different granularities, leading to in-consistent annotations (Fig. 1(b)). Previous methods simply
utilize the majority voting strategy to fuse multiple anno-tations into single ground truth, where all annotations areaveraged to generate an edge probability map (Fig. 1(c)),
ranging from 0 to 1. During training, the pixels with proba-bility higher than a fixed threshold are regarded as positiveand the pixels with probability equal to 0 as negative. Andthe remaining pixels are dropped. Such a simple voting pro-cess neglects the inherent ambiguity and label bias causedby the labeling process.
To address the issues, in this paper, we propose a novel
uncertainty-aware edge detection (UAED) framework thatconverts the deterministic labels into distributions to ex-
plore the inherent label ambiguity in the edge detection task.Unlike previous works that focus on architecture modifi-cation, we target modeling the uncertainty underlying the
multiple edge annotations.
Specifically, the proposed UAED is designed based on
the encoder-decoder architecture, where the encoder gen-erates the feature representations followed by two separatedecoders. Instead of using fixed labels, we treat the predic-tion as a learnable Gaussian distribution, whose mean andvariance are learned by two decoders respectively, and thevariance can be supervised by multiple annotations. Thelearned variance can be naturally regarded as uncertainty,which measures the label ambiguity. Therefore we furtherutilize the learned uncertainty to boost the performance.Fig. 1(d) shows the estimated uncertainty map. We can ob-
serve that the uncertainties of pixels that are close to edgesare much higher than those of smooth regions. This phe-nomenon suggests that pixels with higher uncertainty arevisually more important than pixels with lower uncertainty
and can be regarded as hard samples for detecting edges.
Thus inspired, unlike most uncertainty estimation meth-ods that regard the pixels with higher uncertainty as unre-liable and discard them, we encourage the model to learnmore from the hard samples with higher uncertainty pro-gressively. The experiments on two popular edge detectiondatasets with multiple annotations show the effectiveness ofour proposed method. Compared with transformer-basedEDTER [ 41] (Fig. 1(e)), our proposed UAED combined
with CNN-based architecture can generate more detailededges (Fig. 1(f)), while requires less computation resource
and time. Our contributions can be summarized as follows:
• We propose an uncertainty-aware edge detector,
named UAED, which captures the inherent ambiguitycaused by multiple subjective annotations. To our bestknowledge, this is the first work that provides an un-certainty perspective in edge detection.
• We concentrate on the pixels with higher uncertainty
that play a more important role in edge detection, andfurther design an adaptive weighting loss to emphasizethe training from those hard pixels.
• UAED can be combined with various encoder-decoder
backbones without increasing much computation bur-den. We conduct comprehensive experiments on pop-ular datasets across different model architectures and
achieve consistent improvement.
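The two-decoder design translates into a loss that can be sketched directly: a Gaussian negative log-likelihood of every annotator's map under the predicted mean and variance, plus a pixel-wise term re-weighted by the (detached) predicted uncertainty so hard pixels count more. The weighting schedule and the way annotations are fused below are assumptions, not the exact UAED loss.

```python
import torch
import torch.nn.functional as F

def uncertainty_aware_loss(mean, log_var, annotations, alpha=1.0):
    """mean, log_var: (B, 1, H, W) outputs of the two decoders.
    annotations: (B, A, H, W) binary edge maps from A annotators.
    Returns Gaussian NLL over all annotations + an uncertainty-weighted BCE
    that emphasizes high-variance (hard) pixels. Sketch only."""
    a = annotations.shape[1]
    mu = mean.expand(-1, a, -1, -1)
    lv = log_var.expand(-1, a, -1, -1)
    nll = 0.5 * (lv + (annotations - mu) ** 2 * torch.exp(-lv)).mean()

    target = annotations.float().mean(dim=1, keepdim=True)      # fused soft label
    w = 1.0 + alpha * torch.sigmoid(log_var.detach())            # hard pixels weigh more
    bce = F.binary_cross_entropy(torch.sigmoid(mean), target, weight=w)
    return nll + bce

# toy usage: 4 annotators on a 2-image batch
mean, log_var = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
annos = torch.randint(0, 2, (2, 4, 64, 64)).float()
print(uncertainty_aware_loss(mean, log_var, annos))
```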
|
Zhou_Exploring_Motion_Ambiguity_and_Alignment_for_High-Quality_Video_Frame_Interpolation_CVPR_2023 | Abstract
For video frame interpolation (VFI), existing deep-
learning-based approaches strongly rely on the ground-
truth (GT) intermediate frames, which sometimes ignore the
non-unique nature of motion judging from the given adja-
cent frames. As a result, these methods tend to produce
averaged solutions that are not clear enough. To alleviate
this issue, we propose to relax the requirement of recon-
structing an intermediate frame as close to the GT as possi-
ble. Towards this end, we develop a texture consistency loss
(TCL) upon the assumption that the interpolated content
should maintain similar structures with their counterparts
in the given frames. Predictions satisfying this constraint
are encouraged, though they may differ from the prede-
fined GT. Without the bells and whistles, our plug-and-play
TCL is capable of improving the performance of existing
VFI frameworks consistently. On the other hand, previous
methods usually adopt the cost volume or correlation map
to achieve more accurate image or feature warping. How-
ever, the O(N2)(Nrefers to the pixel count) computational
complexity makes it infeasible for high-resolution cases. In
this work, we design a simple, efficient O(N)yet power-
ful guided cross-scale pyramid alignment (GCSPA) module,
where multi-scale information is highly exploited. Exten-
sive experiments justify the efficiency and effectiveness of
the proposed strategy.
| 1. Introduction
Video frame interpolation (VFI) plays a critical role in
computer vision with numerous applications, such as video
editing and novel view synthesis. Unlike other vision tasks
that heavily rely on human annotations, VFI benefits from
the abundant off-the-shelf videos to generate high-quality
training data. The recent years have witnessed the rapid
development of VFI empowered by the success of deep
neural networks. The popular approaches can be roughly
divided into two categories: 1) optical-flow-based meth-
*Corresponding author
ods [1, 8, 15–17, 20, 26–28, 30, 31, 39, 44–46, 49, 51, 53] and
2) kernel-regression -based algorithms [4–6, 22, 32, 33, 37].
The optical-flow-based methods typically warp the im-
ages/features based on a linear or quadratic motion model
and then complete the interpolation by fusing the warped
results. Nevertheless, it is not flexible enough to model
the real-world motion under the linear or quadratic as-
sumption, especially for cases with long-range correspon-
dence or complex motion. Besides, occlusion reasoning is a
challenging problem for pixel-wise optical flow estimation.
Without the prerequisites above, the kernel-based methods
handle the reasoning and aggregation in an implicit way,
which adaptively aggregate neighboring pixels from the im-
ages/features to generate the target pixel. However, this line
stands the chance of failing to tackle the high-resolution
frame interpolation or large motion due to the limited recep-
tive field. Thereafter, deformable convolutional networks,
a variant of kernel-based methods, are adopted to aggre-
gate the long-term correspondence [5, 7, 22], achieving bet-
ter performance. Despite many attempts, some challenging
issues remain unresolved.
First, the deep-learning-based VFI works focus on learn-
ing the predefined ground truth (GT) and ignore the inherent
motion diversity across a sequence of frames. As illustrated
in Fig. 1 (a), given the positions of a ball in frames I−1and
I1, we conduct a user study of choosing its most possible
position in the intermediate frame I0. The obtained proba-
bility distribution map clearly clarifies the phenomenon of
motion ambiguity in VFI. Without considering this point,
existing methods that adopt the pixel-wise L1 or L2 supervi-
sion possibly generate blurry results, as shown in Fig. 1 (b).
To resolve this problem, we propose a novel texture con-
sistency loss (TCL) that relaxes the rigid supervision of GT
while ensuring texture consistency across adjacent frames.
Specifically, for an estimated patch, apart from the prede-
fined GT, we look for another texture-matched patch from
the input frames as a pseudo label to jointly optimize the
network. In this case, predictions satisfying the texture con-
sistency are also encouraged. From the visualization com-
Figure 1. Analysis of motion ambiguity in VFI. (a) User study of querying the location of a ball in the intermediate frame I0 with
the observed two input frames {I−1, I1}. The results are visualized in a probability distribution map. (b) Visual comparison between
SepConv [33] and our method with/without the proposed texture consistency loss (TCL). (c) Quantitative evaluation of the two methods
with/without TCL loss on Vimeo-Triplets [47] and Middlebury [2] benchmarks.
parison of SepConv [33] and our model with/without TCL1
in Fig. 1 (b), we observe that the proposed TCL leads to
clearer results. Besides, as shown in Fig. 1 (c), it is seen
that our TCL brings about considerable PSNR improvement
on Vimeo-Triplets [47] and Middlebury [2] benchmarks for
both two methods. More visual examples are available in
our supplementary materials.
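A minimal, unoptimized sketch of the texture consistency idea is given below: every non-overlapping patch of the interpolated frame is matched to its most similar patch in the input frames, and the matched patch serves as an additional pseudo label. The patch size, the brute-force nearest-neighbour search over the whole frame (in practice one would restrict it to a local window), and the plain L1 objective are assumptions for illustration and are meant to complement, not replace, the usual GT supervision.

```python
import torch
import torch.nn.functional as F

def texture_consistency_loss(pred, ref_frames, patch=7, stride=7):
    """pred: (B, C, H, W) interpolated frame; ref_frames: e.g. (I_-1, I_1)."""
    pred_p = F.unfold(pred, patch, stride=stride)                     # (B, D, N)
    best_dist, best_patch = None, None
    for ref in ref_frames:
        ref_p = F.unfold(ref, patch, stride=stride).detach()          # (B, D, M)
        dist = torch.cdist(pred_p.transpose(1, 2), ref_p.transpose(1, 2))
        d, idx = dist.min(dim=2)                                      # nearest patch per location
        matched = torch.gather(ref_p, 2,
                               idx.unsqueeze(1).expand(-1, ref_p.size(1), -1))
        if best_dist is None:
            best_dist, best_patch = d, matched
        else:
            take = (d < best_dist).unsqueeze(1)                       # pick the closer frame
            best_patch = torch.where(take, matched, best_patch)
            best_dist = torch.minimum(d, best_dist)
    # Predictions only need to match a real texture, not the exact GT patch.
    return F.l1_loss(pred_p, best_patch)
```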
Second, the cross-scale aggregation during alignment is
not fully exploited in VFI. For example, PDWN [5] con-
ducts an image-level warping using the gradually refined
offsets. However, the single-level alignment may not take
full advantage of the cross-scale information, which has
been proven useful in many low-level tasks [23, 25, 52].
To address this issue, some recent works [5, 13, 31] have
considered multi-scale representations for VFI. Feflow [31]
adopts a PCD alignment proposed by EDVR [42] that per-
forms a coarse-to-fine aggregation for long-range motion
estimation. Specifically, the fusion of image features is
conducted at two adjacent levels, without considering dis-
tant cross-scale aggregation. In this work, we propose a
novel guided cross-scale pyramid alignment (GCSPA) mod-
ule, which performs bidirectional temporal alignment from
low-resolution stages to higher ones. In each step, the previ-
ously aligned low-scale features are regarded as a guidance
for the current-level warping. To aggregate the multi-scale
information, we design an efficient fusion strategy rather
than building the time-consuming cost volume or corre-
lation map. Extensive quantitative and qualitative exper-
iments verify the effectiveness and efficiency of the pro-
posed method.
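Since the excerpt describes GCSPA only at a high level, the following is a deliberately simplified, flow-based stand-in for the guided coarse-to-fine alignment idea: the aligned feature of the coarser level is upsampled and used as guidance when estimating the warp at the next finer level. The shared channel width, the plain convolutional flow heads, and bilinear backward warping are assumptions, not the module actually proposed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def backwarp(feat, flow):
    """Warp `feat` (B, C, H, W) with a dense flow field `flow` (B, 2, H, W)."""
    _, _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(feat.device)      # (2, H, W)
    coords = base.unsqueeze(0) + flow                                 # absolute positions
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=3)                               # (B, H, W, 2)
    return F.grid_sample(feat, grid, align_corners=True)

class GuidedPyramidAlignment(nn.Module):
    """Toy coarse-to-fine alignment: the aligned feature from the coarser level
    is upsampled and used as guidance when warping the next finer level."""
    def __init__(self, c, num_levels=3):
        super().__init__()
        self.flow_heads = nn.ModuleList(
            [nn.Conv2d(3 * c, 2, 3, padding=1) for _ in range(num_levels)])
        self.fuse = nn.ModuleList(
            [nn.Conv2d(2 * c, c, 3, padding=1) for _ in range(num_levels)])

    def forward(self, tgt_pyr, nbr_pyr):
        """tgt_pyr / nbr_pyr: lists of (B, c, H_l, W_l) features, coarsest first."""
        aligned = None
        for lvl, (tgt, nbr) in enumerate(zip(tgt_pyr, nbr_pyr)):
            if aligned is None:
                guidance = torch.zeros_like(nbr)
            else:                              # cross-scale guidance from below
                guidance = F.interpolate(aligned, size=nbr.shape[-2:],
                                         mode="bilinear", align_corners=False)
            flow = self.flow_heads[lvl](torch.cat([tgt, nbr, guidance], dim=1))
            warped = backwarp(nbr, flow)
            aligned = self.fuse[lvl](torch.cat([warped, guidance], dim=1))
        return aligned                         # aligned neighbour feature, finest level
```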
In a nutshell, our contributions are three-fold:
•Texture consistency loss : Inspired by the motion am-
biguity in VFI, we design a novel texture consistency
loss to allow the diversity of interpolated content, pro-
ducing clearer results.
1The four models are trained on Vimeo-Triplets [47] dataset.
•Guided cross-scale pyramid alignment : The pro-
posed alignment strategy utilizes the multi-scale infor-
mation to conduct a more accurate and robust motion
compensation while requiring few computational re-
sources.
•State-of-the-art performance : The extensive experi-
ments including frame interpolation and extrapolation
have demonstrated the superior performance of the
proposed algorithm.
|
Zhao_Minimizing_Maximum_Model_Discrepancy_for_Transferable_Black-Box_Targeted_Attacks_CVPR_2023 | Abstract
In this work, we study the black-box targeted attack prob-
lem from the model discrepancy perspective. On the the-
oretical side, we present a generalization error bound for
black-box targeted attacks, which gives a rigorous theoreti-
cal analysis for guaranteeing the success of the attack. We
reveal that the attack error on a target model mainly de-
pends on empirical attack error on the substitute model and
the maximum model discrepancy among substitute models.
On the algorithmic side, we derive a new algorithm for
black-box targeted attacks based on our theoretical analy-
sis, in which we additionally minimize the maximum model
discrepancy (M3D) of the substitute models when training
the generator to generate adversarial examples. In this way,
our model is capable of crafting highly transferable ad-
versarial examples that are robust to the model variation,
thus improving the success rate for attacking the black-box
model. We conduct extensive experiments on the ImageNet
dataset with different classification models, and our pro-
posed approach outperforms existing state-of-the-art meth-
ods by a significant margin. The code will be available at
https://github.com/Asteriajojo/M3D.
| 1. Introduction
Recently, researchers have shown that Deep Neural Net-
works (DNNs) are highly vulnerable to adversarial exam-
ples [9, 28, 36]. It has been demonstrated that by adding
small and human-imperceptible perturbations, images can
be easily misclassified by deep-learning models. Even
worse, adversarial examples are shown to have transfer-
ability ,i.e., adversarial examples generated by one model
can successfully attack another model with a high prob-
ability [23, 28, 37]. Consequently, there is an increasing
interest in developing new techniques to attack an unseen
black-box model by constructing adversarial examples on
*The corresponding author
a substitute model, which is also known as black-box at-
tack [5, 6, 14–16, 20, 39, 40].
While almost all existing black-box attack works implic-
itly assume the transferability of adversarial examples, the
theoretical analysis of the transferability is still absent. To
this end, in this work, we aim to answer the question of
to what extent the adversarial examples generated on one
known model can be used to successfully attack another un-
seen model. In particular, we are specifically interested in
the targeted attack task, i.e., constructing adversarial exam-
ples that can mislead the unseen black-box model by out-
putting a highly dangerous specified class. We first present
a generalization error bound for black-box targeted attacks
from the model discrepancy perspective, in which we reveal
that the attack error on a target model depends on the attack
error on a substitute model andthe model discrepancy be-
tween the substitute model and the black-box model . Fur-
thermore, the latter term can be bounded by the maximum
model discrepancy on the underlying hypothesis set, which
is irrelevant to the unseen target model, making it possi-
ble to construct adversarial examples by directly minimiz-
ing this term and thus the generalization error.
Based on the generalization error bound, we then design
a novel method called Minimizing Maximum Model Dis-
crepancy (M3D) attack to produce highly transferable per-
turbations for black-box targeted attack. Specifically, we
exploit two substitute models which are expected to main-
tain their model discrepancy as large as possible. At the
same time, we train a generator that takes an image as in-
put and generates an adversarial example to attack these two
substitute models and simultaneously minimize the discrep-
ancy between the two substitute models. In other words,
the generator and the two substitute models are trained in
an adversarial manner to play a min-max game in terms of
the model discrepancy. In this way, the generator is ex-
pected to generate adversarial examples that are robust to
the variation of the substitute models, thus being capable
of attacking the black-box target model successfully with a
high chance.
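One adversarial training step in the spirit of this min-max game might look as follows. The tanh-bounded perturbation, the L1 probability discrepancy, the equal loss weights, and the way the two substitute classifiers are updated are illustrative assumptions rather than the paper's recipe (in particular, keeping the substitutes accurate on clean data is omitted here).

```python
import torch
import torch.nn.functional as F

def m3d_style_step(generator, f1, f2, opt_g, opt_f, images, target_class, eps=16 / 255):
    """One sketched min-max step: the generator fools both substitutes towards the
    target class while making them agree; the substitutes then maximize their
    disagreement on the crafted examples."""
    adv = (images + eps * torch.tanh(generator(images))).clamp(0, 1)
    tgt = torch.full((images.size(0),), target_class,
                     dtype=torch.long, device=images.device)

    # Generator update: targeted attack + minimize model discrepancy.
    l1, l2 = f1(adv), f2(adv)
    discrepancy = (F.softmax(l1, dim=1) - F.softmax(l2, dim=1)).abs().mean()
    loss_g = F.cross_entropy(l1, tgt) + F.cross_entropy(l2, tgt) + discrepancy
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # Substitute update: maximize discrepancy on the (fixed) adversarial examples.
    adv = adv.detach()
    loss_f = -(F.softmax(f1(adv), dim=1) - F.softmax(f2(adv), dim=1)).abs().mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()
    return loss_g.item(), loss_f.item()
```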
We conduct extensive experiments on the ImageNet
dataset using different benchmark models, where our M3D
approach outperforms state-of-the-art methods by a signifi-
cant margin on a wide range of attack settings. Especially,
we show impressive improvements in the situations when
the black-box model has a large model discrepancy from the
substitute model, such as attacking the ResNet [11] model
by crafting adversarial examples on a VGG [35] model. The
main contributions of this paper are as follows:
• We present a generalization error bound for black-box
targeted attacks based on the model discrepancy per-
spective.
• We design a novel generative approach called Mini-
mizing Maximum Model Discrepancy (M3D) attack
to craft adversarial examples with high transferability
based on the generalization error bound.
• We demonstrate the effectiveness of our method by
strong empirical results, where our approach outper-
forms the state-of-art methods by a significant margin.
|
Zhu_Occlusion-Free_Scene_Recovery_via_Neural_Radiance_Fields_CVPR_2023 | Abstract
Our everyday lives are filled with occlusions that we
strive to see through. By aggregating desired background
information from different viewpoints, we can easily elim-
inate such occlusions without any external occlusion-free
supervision. Though several occlusion removal methods
have been proposed to empower machine vision systems
with such ability, their performances are still unsatisfactory
due to reliance on external supervision. We propose a
novel method for occlusion removal by directly building
a mapping between position and viewing angles and the
corresponding occlusion-free scene details leveraging Neu-
ral Radiance Fields (NeRF). We also develop an effective
scheme to jointly optimize camera parameters and scene
reconstruction when occlusions are present. An additional
depth constraint is applied to supervise the entire optimiza-
tion without labeled external data for training. The exper-
imental results on existing and newly collected datasets
validate the effectiveness of our method. Our project page:
https://freebutuselesssoul.github.io/occnerf .
| 1. Introduction
Neural Radiance Fields (NeRF) are capable of learning
the scene representation implicitly from a set of 2D images,
yet not every scene is favored by observers. Many unde-
sirable occlusions in our world obscure details that are es-
sential to our understanding of the world. In general, such
obstructions range from water droplets and scribbles on a
piece of glass, to fences or any objects occluding the de-
sired scenes ( e.g., a statue closer to the camera in a land-
mark scene). How to apply computational methods to ex-
clude them from the scene representation is of great interest.
Occlusion removal ( e.g., [29]) is the direct solution to
achieve this goal. However, explicit occlusion removal may
*Corresponding author.
oversmooth essential details necessary for clearly observ-
ing the desired background scenes. In addition, current
methods mainly depend on external occlusion-free super-
vision ( e.g., fence removal [4], raindrop removal [22]) to
develop the reliable capability in removing certain types of
occlusions. Once encountering a new scenario with unseen
occlusion types beyond their training data, these methods
might show degraded performances. To handle more di-
verse types of occlusions, generic constraints from multiple
viewpoints are widely adopted [4, 10, 12, 14, 19, 27, 29] via
mimicking our human vision systems, who can easily piece
together the desired background scenes by looking at them
from different viewpoints. But the majority of these meth-
ods just consider viewpoints as a prior in relation to spatial
correlations. Their backbones still rely on external train-
ing data with corresponding ground truth for optimization,
which still does not fundamentally alleviate the difficulty of
handling diverse occlusions in the real world.
An occlusion-free world can be progressively aggregated
by seeing its occluded part from different viewing directions
to reveal occlusions previously unobservable in each sin-
gle perspective, as illustrated in the left part of Fig. 1 (the
fan is occluded in the target view). Since NeRF [18] em-
ploys an implicit representation to map viewpoints to pix-
els, one may come to the naive solution of directly con-
structing a NeRF which is optimized across multiple view-
points. However, the vanilla NeRF [18] representing the
scene as a whole is not able to treat occlusion and back-
ground scenes distinctively, and as long as the occlusion
remains static, NeRF is designed to faithfully reconstruct
its presence. Meanwhile, many NeRF variants can de-
compose the whole scene into different components ( e.g.,
NeRF-W [17], Ha-NeRF [3], NeRFReN [7]), but they can-
not handle the real-world static occlusions. This is because
NeRF-W [17] and Ha-NeRF [3] rely on the inconsistency
of undesired components across different views to achieve
such separation, which is difficult to be observed in a con-
tinuous 3D world. On the other hand, NeRFReN [7] only
works in separating the transmission and reflection compo-
nents caused by semi-transparent planar glass, which is in-
capable of handling opaque occlusions in the real world.
Another problem comes from NeRF’s reliance on cam-
era parameters pre-computed by COLMAP [23], be-
cause handcrafted features extracted and matched using
COLMAP [23] are for the whole scene, and are inca-
pable in distinguishing between undesired occlusions and
the desired background. When the features from occlu-
sion dominate the matching process, the obtained camera
parameters cannot faithfully model the spatial correlation
of background scenes across multiple viewpoints. Besides,
COLMAP [23] is not a stable option for pose estimation in
the real world [28]. The existence of occlusions may pre-
vent it from working properly, making the occlusion-free
scene representation infeasible.
In this paper, we aim at seeing through the occluded
scenes by developing an occlusion-free scene representa-
tion without considering specific occlusion types, based on
which we can render any occlusion-free images from de-
sired viewpoints. Our method first maps viewing angles
and their corresponding scene details by leveraging NeRF.
We then introduce a depth constraint to probe the occluded
areas by measuring the depth of occlusion and background,
by assuming that occlusions are always in the foreground
with closer distance . During the scene modeling process, a
pose refinement scheme is further introduced to refine the
camera pose with the features of the background scene. As
outlined in Fig. 1, our pipeline contains three modules to
achieve the above goals: 1) a scene reconstruction module
to represent the whole scene using NeRF (with occlusions),
2) a cost volume construction module to gather information
from neighboring views as guidance (to indicate where oc-
clusions are), and 3) a selective supervision scheme to con-
strain another NeRF on the desired background information
(occlusions removed), and our contributions can be summa-
rized as follows:
• an occlusion-free representation without relying on
any external prior as supervisory knowledge;
• a joint optimization of pose refinement and scene re-
construction by effective multi-view feature fusion;
• a selective supervision scheme to probe the occluded
areas guided by the scene depth information.
Based on the experiments with a dataset containing diverse
types of occlusions, the proposed method can eliminate oc-
clusions including scribbles and water droplets on a piece
of glass, fences, and even irregular-shaped statues without
relying on any external supervision.
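As a small illustration of how the depth assumption can drive the selective supervision module, the sketch below marks a pixel as occluded when the depth rendered from the full-scene model is noticeably closer than the background depth suggested by neighbouring views, and masks the photometric loss of the background model accordingly. The margin value and the source of `background_depth` (e.g., a cost volume over nearby frames) are assumptions made only for illustration.

```python
import torch

def occlusion_free_mask(full_depth, background_depth, margin=0.05):
    """True where the background NeRF may safely be supervised."""
    occluded = full_depth < (background_depth - margin)
    return ~occluded

def masked_photometric_loss(pred_rgb, gt_rgb, mask):
    """Apply the reconstruction loss only on pixels judged occlusion-free."""
    mask = mask.float().unsqueeze(-1)
    return ((pred_rgb - gt_rgb) ** 2 * mask).sum() / mask.sum().clamp(min=1.0)
```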
|
Zhao_Streaming_Video_Model_CVPR_2023 | Abstract
Video understanding tasks have traditionally been mod-
eled by two separate architectures, specially tailored for
two distinct tasks. Sequence-based video tasks, such as ac-
tion recognition, use a video backbone to directly extract
spatiotemporal features, while frame-based video tasks,
such as multiple object tracking (MOT), rely on single fixed-
image backbone to extract spatial features. In contrast, we
propose to unify video understanding tasks into one novel
streaming video architecture, referred to as Streaming Vi-
sion Transformer (S-ViT). S-ViT first produces frame-level
features with a memory-enabled temporally-aware spatial
encoder to serve the frame-based video tasks. Then the
frame features are input into a task-related temporal de-
*This work was done during the internship of Yucheng at MSRA.
†Corresponding author.
coder to obtain spatiotemporal features for sequence-based
tasks. The efficiency and efficacy of S-ViT is demonstrated
by the state-of-the-art accuracy in the sequence-based ac-
tion recognition task and the competitive advantage over
conventional architecture in the frame-based MOT task. We
believe that the concept of streaming video model and the
implementation of S-ViT are solid steps towards a unified
deep learning architecture for video understanding. Code
will be available at https://github.com/yuzhms/
Streaming-Video-Model .
| 1. Introduction
As a fundamental research topic in computer vision,
video understanding mainly deals with two types of tasks.
The sequence-based [9, 56] tasks aim to understand what
is happening in a period of time. For example, the action
Figure 2. Comparison of video modeling paradigms on both the
sequence-based action recognition task and the frame-based multi-
ple object tracking task. The proposed streaming model achieves
higher performance than the frame-based model on both tasks
while showing no loss compared to the clip-based model on the
sequence-based task. The clip-based model cannot be directly used
in frame-based tasks.
recognition task classifies the object action in a video se-
quence into a set of predefined categories. The frame-based
tasks [11, 31, 72], on the other hand, aim to look for key
information in a certain point of time in a video. For ex-
ample, the multiple object tracking (MOT) task predicts the
bounding boxes of objects in each video frame. Although
both types of tasks take a video as input, they are handled
very differently in computer vision research.
The different treatment of these two types of tasks is
mainly reflected in the type of backbone network used.
The action recognition task is usually handled by a clip-
based architecture, where a video model [1], which takes a
video clip as input and outputs spatiotemporal features, is
used. In the video object segmentation (VOS), video ob-
ject detection (VOD), and multiple object tracking (MOT)
tasks, however, a frame-based architecture [14, 21] is of-
ten adopted. The frame-based architecture employs image
backbone to generate independent spatial features for each
frame. In most tracking-by-detection MOT solutions, these
features are directly used as the input to the object detector.
Both types of treatment have their respective drawbacks.
On the one hand, the clip-based architecture processes a
group of video frames at one time, which puts great pressure
on the processor’s memory space and processing power. As
a result, it is difficult to handle long videos or long actions
effectively. In addition, the summarized spatiotemporal fea-
tures extracted by a video backbone usually lack sufficient
spatial resolution to be used for dense prediction tasks. On
the other hand, the frame-based architecture does not con-
sider surrounding frames in the process of spatial feature
extraction. As a result, the features do not contain any tem-
poral information or an out-of-band mechanism is in need to gather additional temporal information. We believe that a
video frame should be treated differently from a single im-
age and that temporal-aware spatial features are more pow-
erful for solving frame-based video understanding tasks.
In this paper, we propose a unified architecture to han-
dle both types of video tasks. The proposed streaming
video model, as shown in Fig.1, circumvents the draw-
backs of the conventional treatment by a two-stage de-
sign. Specifically, it is composed of a temporal-aware
spatial encoder, which extracts temporal-aware spatial fea-
ture for each video frame, and a task-related temporal de-
coder, which transfers frame-level features to task-specific
outputs for sequence-based tasks. When compared with
frame-based architecture, the temporal-aware spatial en-
coder in streaming video model leverages additional infor-
mation from past frames, so that it has potential to obtain
more powerful and robust features. When compared with
clip-based architecture, our model disentangles the frame-
level feature extraction and clip-level feature fusion, so as
to alleviate the computation pressure while enabling more
flexible use scenarios, such as long-term video inference or
online video inference.
We instantiate such a streaming video model by building
the streaming video Transformer (S-ViT) based on the vi-
sion Transformer [14]. S-ViT is featured by self-attention
within a frame to extract spatial information and cross-
attention across frames to make the fused feature temporal-
aware. Specifically, for the first frame of a video, S-ViT
extracts exactly the same spatial feature as a standard im-
age ViT, but it stores keys and values of every Trans-
former layer in a memory. For subsequent frames in a
video, both intra-frame self-attention and inter-frame cross-
attention [54] with the stored memory is calculated. S-ViT
borrows ideas from triple 2D (T2D) decomposition [74]
and limits the cross-attention region within patches with the
same horizontal or vertical positions. This decomposition
reduces the computational cost and allows S-ViT to handle
long histories. The output of this stage can directly be used
by the frame-based video tasks. For sequence-based tasks,
an additional temporal decoder, implemented by a temporal
Transformer, is used to gather information from multiple
frames.
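A minimal sketch of such a temporal-aware spatial block is given below: intra-frame self-attention plus cross-attention to keys/values cached from past frames. The caching policy, the detached memory, and the omission of the row/column (T2D) attention mask are simplifications made here for brevity, not the actual S-ViT implementation.

```python
import torch
import torch.nn as nn

class StreamingBlock(nn.Module):
    """Intra-frame self-attention plus cross-attention to a memory of past frames."""
    def __init__(self, dim, heads=8, max_memory=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.max_memory = max_memory
        self.memory = []                            # cached tokens of previous frames

    def forward(self, tokens):                      # tokens: (B, N, dim) for one frame
        q = self.norm1(tokens)
        x = tokens + self.self_attn(q, q, q)[0]     # spatial attention within the frame
        if self.memory:                             # temporal cross-attention to the past
            mem = torch.cat(self.memory, dim=1)
            x = x + self.cross_attn(self.norm2(x), mem, mem)[0]
        self.memory.append(x.detach())              # store for subsequent frames
        self.memory = self.memory[-self.max_memory:]
        return x
```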
We evaluate our S-ViT model on two downstream tasks.
The first task is the sequence-based action recognition. We
get 84.7% top-1 accuracy on Kinetics-400 [23] dataset and
69.3% top-1 accuracy on Something-Something v2 [20]
dataset, which is on par with the state-of-the-art, but at
a reduced computation expenditure. The second task is
MOT, which operates on video frames in a widely adopted
tracking-by-detection framework. We show that introduc-
ing temporal-aware spatial encoder creates comparative ad-
vantage over a frame-based architecture under a fair setting
on MOT17 [40] benchmark.
We summarize the contributions as follows. First, we
propose a unified architecture, named streaming video
model, for both frame-based and sequence-based video un-
derstanding tasks. Second, we implement a T2D-based
streaming video Transformer and demonstrate how it can
be used to serve different types of video tasks. Third, ex-
periments on action recognition and MOT tasks show that
our unified model could achieve state-of-the-art results on
both types of tasks. We believe that the work presented in
this paper is a solid step towards a universal video process-
ing architecture.
|
Zheng_LayoutDiffusion_Controllable_Diffusion_Model_for_Layout-to-Image_Generation_CVPR_2023 | Abstract
Recently, diffusion models have achieved great success
in image synthesis. However, when it comes to the layout-
to-image generation where an image often has a complex
scene of multiple objects, how to make strong control over
both the global layout map and each detailed object remains
a challenging task. In this paper, we propose a diffusion
model named LayoutDiffusion that can obtain higher gen-
eration quality and greater controllability than the previous
works. To overcome the difficult multimodal fusion of im-
age and layout, we propose to construct a structural image
patch with region information and transform the patched
image into a special layout to fuse with the normal lay-
out in a unified form. Moreover, Layout Fusion Module
(LFM) and Object-aware Cross Attention (OaCA) are pro-
posed to model the relationship among multiple objects and
designed to be object-aware and position-sensitive, allow-
*Equal contribution.
†Corresponding author.
ing for precisely controlling the spatial related information.
Extensive experiments show that our LayoutDiffusion out-
performs the previous SOTA methods on FID, CAS by rela-
tively 46.35 %, 26.70 %on COCO-stuff and 44.29 %, 41.82 %
on VG. Code is available at https://github.com/
ZGCTroy/LayoutDiffusion .
| 1. Introduction
Recently, the diffusion model has achieved encouraging
progress in conditional image generation, especially in text-
to-image generation such as GLIDE [24], Imagen [31], and
Stable Diffusion [30]. However, text-guided diffusion mod-
els may still fail in the following situations. As shown in
Fig. 1 (a), when aiming to generate a complex image with
multiple objects, it is hard to design a prompt properly and
comprehensively. Even input with well-designed prompts,
problems such as missing objects and incorrectly generat-
ing objects’ positions, shapes, and categories still occur in
the state-of-the-art text-guided diffusion model [24, 30, 31].
This is mainly due to the ambiguity of the text and its weak-
ness in precisely expressing the position of the image space.
Fortunately, this is not a problem when using the coarse lay-
out as guidance, which is a set of objects with the annota-
tion of the bounding box (bbox) and object category. With
both spatial and high-level semantic information, the diffu-
sion model can obtain more powerful controllability while
maintaining the high quality.
However, early studies [2, 14, 38, 42] on layout-to-image
generation are almost limited to generative adversarial net-
works (GANs) and often suffer from unstable conver-
gence [1] and mode collapse [27]. Despite the advantages of
diffusion models in easy training [10] and significant qual-
ity improvement [7], few studies have considered applying
diffusion in the layout-to-image generation task. To our
knowledge, only LDM [30] supports the condition of lay-
out and has shown encouraging progress in this field.
In this paper, different from LDM that applies the sim-
ple multimodal fusion method (e.g., the cross attention) or
direct input concatenation for all conditional input, we aim
to specifically design the fusion mechanism between lay-
out and image. Moreover, instead of conditioning only
in the second stage like LDM, we propose an end-to-end
one-stage model that considers the condition for the whole
process, which may have the potential to help mitigate
loss in the task that requires fine-grained accuracy in pixel
space [30]. The fusion between image and layout is a diffi-
cult multimodal fusion problem. Compared to the fusion of
text and image, the layout has more restrictions on the po-
sition, size, and category of objects. This requires a higher
controllability of the model and often leads to a decrease in
the naturalness and diversity of the generated image. Fur-
thermore, the layout is more sensitive to each token and the
loss in token of layout will directly lead to the missing ob-
jects.
To address the problems mentioned above, we propose
treating the patched image and the input layout in a uni-
fied form. Specifically, we construct a structural image
patch at multi-resolution by adding the concept of region
that contains information of position and size. As a re-
sult, each patch of the image is transformed into a special
type of object, and the entire patched image will also be
regarded as a layout. Finally, the difficult problem of multi-
modal fusion between image and layout will be transformed
into a simple fusion with a unified form in the same spatial
space of the image. We name our model LayoutDiffuison,
a layout-conditional diffusion model with Layout Fusion
Module (LFM), object-aware Cross Attention Mechanism
(OaCA), and corresponding classifier-free training and sam-
pling scheme. In detail, LFM fuses the information of each
object and models the relationship among multiple objects,
providing a latent representation of the entire layout. To
make the model pay more attention to the information related to the object, we propose an object-aware fusion mod-
ule named OaCA. Cross-attention is made between the im-
age patch feature and layout in a unified coordinate space
by representing the positions of both of them as bounding
boxes. To further improve the user experience of LayoutD-
iffuison, we also make several optimizations on the speed of
the classifier-free sampling process and could significantly
outperform the SOTA models in 25 iterations.
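The sketch below illustrates the unified-form idea in isolation: image patches and layout objects are both given a content embedding plus a bounding-box embedding before cross-attention, so that positions live in one coordinate space. The MLP box encoder, the square patch grid, and the residual connection are assumptions and not the exact OaCA design.

```python
import torch
import torch.nn as nn

class ObjectAwareCrossAttention(nn.Module):
    """Image patches and layout objects share one "object" form: a content
    embedding plus a bounding-box embedding in normalized image coordinates."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.box_embed = nn.Sequential(nn.Linear(4, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    @staticmethod
    def patch_boxes(h, w, device):
        """Every cell of an h x w patch grid becomes a normalized (x0, y0, x1, y1) box."""
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        x0, y0 = xs.flatten() / w, ys.flatten() / h
        return torch.stack([x0, y0, x0 + 1.0 / w, y0 + 1.0 / h], dim=1).to(device)

    def forward(self, patch_feats, obj_feats, obj_boxes):
        """patch_feats: (B, H*W, dim); obj_feats: (B, K, dim); obj_boxes: (B, K, 4)."""
        n = patch_feats.size(1)
        h = w = int(n ** 0.5)                       # assumes a square patch grid
        q = patch_feats + self.box_embed(self.patch_boxes(h, w, patch_feats.device))
        kv = obj_feats + self.box_embed(obj_boxes)
        out, _ = self.attn(q, kv, kv)
        return patch_feats + out
```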
Experiments are conducted on COCO-stuff [5] and Vi-
sual Genome (VG) [19]. Various metrics ranging from qual-
ity, diversity, and controllability show that LayoutDiffusion
significantly outperforms both state-of-the-art GAN-based
and diffusion-based methods.
Our main contribution is listed below.
• Instead of using the dominated GAN-based methods,
we propose a diffusion model named LayoutDiffusion
for layout-to-image generations, which can generate
images with both high-quality and diversity while
maintaining precise control over the position and size
of multiple objects.
• We propose to treat each patch of the image as a special
object and accomplish the difficult multimodal fusion
of layout and image in a unified form. LFM and OaCA
are then proposed to fuse the multi-resolution image
patches with user’s input layout.
• LayoutDiffuison outperforms the SOTA layout-to-
image generation method on FID, DS, CAS by rela-
tively around 46.35 %, 9.61 %, 26.70 %on COCO-stuff
and 44.29 %, 11.30 %, 41.82 %on VG.
|
Zheng_EditableNeRF_Editing_Topologically_Varying_Neural_Radiance_Fields_by_Key_Points_CVPR_2023 | Abstract
Neural radiance fields (NeRF) achieve highly photo-
realistic novel-view synthesis, but it’s a challenging problem to edit the scenes modeled by NeRF-based methods, especially for dynamic scenes. We propose editable neural
radiance fields that enable end-users to easily edit dynamic
scenes and even support topological changes. Input with an image sequence from a single camera, our network is trained fully automatically and models topologically varying dynamics using our picked-out surface key points. Then end-users can edit the scene by easily dragging the key
points to desired new positions. To achieve this, we propose a scene analysis method to detect and initialize key points
by considering the dynamics in the scene, and a weighted key points strategy to model topologically varying dynamics by joint key points and weights optimization. Our method supports intuitive multi-dimensional (up to 3D) editing and
can generate novel scenes that are unseen in the input se-
quence. Experiments demonstrate that our method achieves high-quality editing on various dynamic scenes and outperforms the state-of-the-art. Our code and captured data are available at https://chengwei-zheng.github.
io/EditableNeRF/ .
| 1. Introduction
Neural radiance fields (NeRF) [ 23] have shown great
power in novel-view synthesis and enable many applications as this method achieves photo-realistic rendering [9].
Recent techniques have further improved NeRF by extending it to handle dynamic scenes [27,30,40] and even topologically varying scenes [28]. However, these works mainly
focus on reconstruction itself but do not consider scene editing. Thus, for rendering, only the camera views can be changed, while the modeled scenes cannot be edited.
Recently, some frameworks have been proposed to make
neural radiance fields editable in different aspects. Some of
them aim to edit the reconstructed appearance and enable relighting [2,35,54]; some allow controlling the shapes and
colors of objects from a specific category [ 15,20,44,47];
and some divide the scene into different parts and the location of each part can be modified [48,49,52]. However, the
dynamics of moving objects cannot be edited by the previ-
ous methods. And this task becomes much more challeng-ing when the dynamics contain topological changes. Topo-logical changes can lead to motion discontinuities (e.g., be-tween the hammer and the piano keys, between the cupsand the table in Fig. 1) in 3D space and further cause notice-
able artifacts if they are not modeled well. A state-of-the-artframework CoNeRF [ 16] tries to resolve this problem by us-
ing manual supervision. However, it only supports limitedand one-dimensional editing for each scene part, requiringuser annotations as supervision.
We propose EditableNeRF, editable topologically vary-
ing neural radiance fields that are trained without manual supervision and support intuitive multi-dimensional (up to three-dimensional) editing. The key of our method is to represent motions and topological changes by the movements of some sparse surface key points. Each key point is able to control the topologically varying dynamics of a moving part, as well as other effects like shadow and reflection changes through the neural radiance fields. This key-point-based method enables end-users to edit the scene by easily dragging the key points to their desired new positions.
To achieve this, we first apply a scene analysis method
to detect key points in the canonical space and track them in the full sequence for key point initialization. We introduce a network to estimate spatially-varying weights for all scene points and use the weighted key points to model the dynamics in the scene, including topological changes. In the training stage, our network is trained to reconstruct the scene using the supervision from the input image sequence, and the key point positions are also optimized by taking motion (optical flow) and geometry (depth maps) constraints as additional supervision. After training, the scene can be edited
by controlling the key points’ positions, and novel scenes
that are unseen during training can also be generated.
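A compact sketch of the weighted key-point conditioning is shown below: a small network predicts spatially-varying weights over the key points for every query point, and the weighted key-point positions form the condition fed to the radiance field. The network size, the softmax normalization, and the way the condition is concatenated to the NeRF input are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class WeightedKeyPointCondition(nn.Module):
    """Each query point receives spatially-varying weights over K key points;
    the weighted key-point positions condition the radiance field."""
    def __init__(self, num_keypoints, hidden=128):
        super().__init__()
        self.weight_net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, num_keypoints))

    def forward(self, query_xyz, keypoints):
        """query_xyz: (N, 3) sample points; keypoints: (K, 3) current key points."""
        w = torch.softmax(self.weight_net(query_xyz), dim=-1)   # (N, K)
        cond = w @ keypoints                                    # (N, 3) weighted key points
        return torch.cat([query_xyz, cond], dim=-1)             # input to the NeRF MLP
```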
The contribution of this paper lies in the following as-
pects:
• Key-point-driven neural radiance fields achieving intu-
itive multi-dimensional editing even with topological changes, without requiring annotated training data.
• A weighted key points strategy modeling topologically
varying dynamics by joint key points and weights optimization.
• A scene analysis method to detect and initialize key
points by considering the dynamics in the scene.
|
Zhao_Learning_Video_Representations_From_Large_Language_Models_CVPR_2023 | Abstract
We introduce LAVILA, a new approach to learning
video-language representations by leveraging Large Lan-
guage Models (LLMs). We repurpose pre-trained LLMs to
be conditioned on visual input, and finetune them to create
automatic video narrators. Our auto-generated narrations
offer a number of advantages, including dense coverage
of long videos, better temporal synchronization of the vi-
sual information and text, and much higher diversity of text.
The video-language embedding learned contrastively with
these narrations outperforms the previous state-of-the-art
on multiple first-person and third-person video tasks, both
in zero-shot and finetuned setups. Most notably, LAVILA
obtains an absolute gain of 10.1% on EGTEA classifica-
tion and 5.9% Epic-Kitchens-100 multi-instance retrieval
benchmarks. Furthermore, LAVILAtrained with only half
the narrations from the Ego4D dataset outperforms models
trained on the full set, and shows positive scaling behavior
on increasing pre-training data and model size.
| 1. Introduction
Learning visual representation using web-scale image-
text data is a powerful tool for computer vision. Vision-
language approaches [ 31,49,80] have pushed the state-of-
the-art across a variety of tasks, including zero-shot classi-
fication [ 49], novel object detection [ 87], and even image
generation [ 52]. Similar approaches for videos [ 4,39,46],
however, have been limited by the small size of paired
video-text corpora compared to the billion-scale image-text
datasets [ 31,49,84]—even though access to raw video data
has exploded in the past decade. In this work, we show it
is possible to automatically generate text pairing for such
videos by leveraging Large Language Models (LLMs), thus
taking full advantage of the massive video data. Learning
video-language models with these automatically generated
annotations leads to stronger representations, and as Fig-
ure 1 shows, sets a new state-of-the-art on six popular first
*Work done during an internship at Meta.
Figure 1. LAVILAsets a new state-of-the-art across a number
of first and third-person video understanding tasks ( cf. Table 1for
details), by learning a video-language representation using super-
vision from large language models as narrators.
and third-person video benchmarks.
Our method, called LAVILA: Language-model
augmented Video-Language pre-training, leverages pre-
trained LLMs, e.g. GPT-2 [ 50], which encode within
their weights a treasure trove of factual knowledge and
conversational ability. As shown in Figure 2, we repurpose
these LLMs to be “visually-conditioned narrators”, and
finetune on all accessible paired video-text clips. Once
trained, we use the model to densely annotate thousands
of hours of videos by generating rich textual descriptions.
This pseudo-supervision can thus pervade the entire video,
in between and beyond the annotated snippets. Paired
with another LLM trained to rephrase existing narrations,
LAVILAis able to create a much larger and more diverse
set of text targets for video-text contrastive learning. In
addition to setting a new state-of-the-art as noted earlier,
the stronger representation learned by L AVILAeven
outperforms prior work using only half the groundtruth
annotations (Figure 5).
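At a very high level, the pipeline can be sketched as pseudo-labelling followed by standard video-text contrastive training. The callables `narrator` and `rephraser`, the number of samples per clip, and the symmetric InfoNCE loss below are assumptions used only to make the outline concrete; they are not the released implementation.

```python
import torch
import torch.nn.functional as F

def narrate_corpus(narrator, rephraser, videos, human_narrations, per_clip=5):
    """Toy outline of the pseudo-labelling stage: a visually conditioned language
    model densely narrates clips, and a second language model rephrases the
    available human narrations. Both models are assumed to return lists of strings."""
    text_targets = []
    for clip in videos:
        text_targets += [(clip, t) for t in narrator(clip, num_samples=per_clip)]
    for clip, narration in human_narrations:
        text_targets += [(clip, rephraser(narration)), (clip, narration)]
    return text_targets

def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over matched (video, text) pairs in a batch."""
    video_emb = F.normalize(video_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = video_emb @ text_emb.t() / temperature
    targets = torch.arange(len(logits), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```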
LAVILA’s strong performance can be attributed to a
number of factors. First, L AVILAcan provide temporally
dense supervision for long-form videos, where the associ-
Figure 2. LAVILAleverages Large Language Models (LLMs)
to densely narrate long videos, and uses those narrations to train
strong dual-encoder models. While prior work uses sparsely la-
beled text by humans, or weakly aligned text transcribed from
speech, L AVILAis able to leverage dense, diverse, and well-
aligned text generated by a LLM.
ated captions are either too sparse, or the video-level “Alt-
Text” (in the case of web videos) does not describe all the
nuanced activities happening in it. Second, the generated
text is well-aligned with the visual input. Although prior
work has leveraged automatic speech transcription on How-
To videos [ 45] to automatically extract clips paired with
text from the speech, such datasets have relatively poor
alignment between the visual and textual content ( ≤50%,
cf. [25,45]), limiting the quality of the learned represen-
tations. Third, L AVILAcan significantly expand annota-
tions when only a little is available. For instance, videos of
mundane day-to-day activities, especially from an egocen-
tric viewpoint, could be very useful for assistive and aug-
mented reality applications. Such videos, however, are rare
on the internet, and hence do not readily exist with associ-
ated web text. Recent work [ 24] instead opted to manually
capture and narrate such video data. These narrations how-
ever required significant manual effort: 250K hours of an-
notator time spent in narrating 3.6K hours of video. In con-
trast, L AVILAis able to automatically narrate each video
multiple times and far more densely, and hence learns much
stronger representations.
We extensively evaluate L AVILAacross multiple video-
text pre-training datasets and downstream tasks to validate
its effectiveness. Specifically, after being pre-trained on
Ego4D, the largest egocentric video datasets with narra-
tions, L AVILAcan re-narrate the whole dataset 10 ×over.
The resulting model learned on these expanded narrations
sets a new state-of-the-art on a wide range of downstream
tasks across challenging datasets, including multi-instancevideo retrieval on Epic-Kitchens-100 ( 5.9% absolute gain
on mAP), multiple-choice question answering on Ego4D
(5.9% absolute gain on intra-video accuracy), and action
recognition on EGTEA ( 10.1% absolute gain on mean ac-
curacy). It obtains gains both when evaluated for zero-shot
transfer to the new dataset, as well as after fine-tuning on
that dataset. Similar gains are shown in third-person video
data. When training L AVILAafter densely re-narrating
HowTo100M, we outperform prior work on downstream ac-
tion classification on UCF-101 and HMDB-51. In a case
study of semi-supervised learning, we show that our model,
which only ever sees 50% of the human-labeled data, is ca-
pable of outperforming the baseline model trained with all
the narrations. Moreover, the gains progressively increase
as we go to larger data regimes and larger backbones, sug-
gesting the scalability of our method.
|
Zielonka_Instant_Volumetric_Head_Avatars_CVPR_2023 | Abstract
We present Instant Volumetric Head Avatars (INSTA),
a novel approach for reconstructing photo-realistic digi-
tal avatars instantaneously. INSTA models a dynamic neu-
ral radiance field based on neural graphics primitives em-
bedded around a parametric face model. Our pipeline is
trained on a single monocular RGB portrait video that ob-
serves the subject under different expressions and views.
While state-of-the-art methods take up to several days to
train an avatar, our method can reconstruct a digital avatar
in less than 10 minutes on modern GPU hardware, which is
orders of magnitude faster than previous solutions. In ad-
dition, it allows for the interactive rendering of novel poses
and expressions. By leveraging the geometry prior of the
underlying parametric face model, we demonstrate that IN-
STA extrapolates to unseen poses. In quantitative and quali-
tative studies on various subjects, INSTA outperforms state-
of-the-art methods regarding rendering quality and training
time. Project website: https://zielon.github.io/insta/ | 1. Introduction
For immersive telepresence in AR or VR, we aim for
digital humans (avatars) that mimic the motions and facial
expressions of the actual subjects participating in a meet-
ing. Besides the motion, these avatars should reflect the
human’s shape and appearance. Instead of prerecorded, old
avatars, we aim to instantaneously reconstruct the subject’s
look to capture the actual appearance during a meeting. To
this end, we propose Instant Volumetric Head Avatars (IN-
STA), which enables the reconstruction of an avatar within
a few minutes ( ∼10 min) and can be driven at interactive
frame rates. For easy accessibility, we rely on commodity
hardware to train and capture the avatar. Specifically, we
use a single RGB camera to record the input video. State-
of-the-art methods that use similar input data to reconstruct
a human avatar require a relatively long time to train, rang-
ing from around one day [20] to almost a week [16,58]. Our
approach uses dynamic neural radiance fields [16] based on
neural graphics primitives [38], which are embedded around
a parametric face model [25], allowing low training times
and fast evaluation. In contrast to existing methods, we use
a metrical face reconstruction [59] to ensure that the avatar
has metrical dimensions such that it can be viewed in an
AR/VR scenario where objects of known size are present.
We employ a canonical space where the dynamic neural
radiance field is constructed. Leveraging the motion esti-
mation employing the parametric face model FLAME [25],
we establish a deformation field around the surface using a
bounding volume hierarchy (BVH) [12]. Using this defor-
mation field, we map points from the deformed space into
the canonical space, where we evaluate the neural radiance
field. As the surface deformation of the FLAME model does
not include details like wrinkles or the mouth interior, we
condition the neural radiance field by the facial expression
parameters. To improve the extrapolation to novel views,
we further leverage the FLAME-based face reconstruction
to provide a geometric prior in terms of rendered depth
maps during training of the NeRF [36]. In comparison to
state-of-the-art methods like NeRFace [16], IMAvatar [58],
or Neural Head Avatars (NHA) [20], our method achieves a
higher rendering quality while being significantly faster to
train and evaluate. We quantify this improvement in a series
of experiments, including an ablation study on our method.
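The deformation lookup described above can be approximated with the toy code below, which attaches each posed-space sample to its nearest FLAME vertex (the actual method queries the nearest triangle through a BVH) and evaluates a canonical field conditioned on the expression code. The nearest-vertex shortcut, the plain offset transport, and the concatenated conditioning are assumptions rather than the paper's implementation.

```python
import torch

def deformed_to_canonical(points, verts_deformed, verts_canonical):
    """points: (N, 3) samples in posed space; verts_*: (V, 3) FLAME vertices."""
    idx = torch.cdist(points, verts_deformed).argmin(dim=1)   # nearest surface element
    offset = verts_canonical[idx] - verts_deformed[idx]
    return points + offset                                     # canonical-space samples

def query_canonical_field(mlp, points, dirs, expression, verts_def, verts_can):
    """`mlp` is any callable mapping the concatenated inputs to (rgb, density)."""
    canonical = deformed_to_canonical(points, verts_def, verts_can)
    expr = expression.expand(canonical.shape[0], -1)           # per-frame expression code
    return mlp(torch.cat([canonical, dirs, expr], dim=-1))
```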
In summary, we present Instant Volumetric Head Avatars
with the following contributions:
• a surface-embedded dynamic neural radiance field
based on neural graphics primitives, which allows us
to reconstruct metrical avatars in a few minutes instead
of hours or days,
• and a 3DMM-driven geometry regularization of the
dynamic density field to improve pose extrapolation,
an important aspect of AR/VR applications.
|
Zou_Generalized_Decoding_for_Pixel_Image_and_Language_CVPR_2023 | Abstract
We present X-Decoder, a generalized decoding model
that can predict pixel-level segmentation and language to-
kens seamlessly. X-Decoder takes as input two types of
queries: ( i) generic non-semantic queries and ( ii) semantic
queries induced from text inputs, to decode different pixel-
level and token-level outputs in the same semantic space.
With such a novel design, X-Decoder is the first work that
provides a unified way to support all types of image segmen-
tation and a variety of vision-language (VL) tasks. With-
out any pseudo-labeling, our design enables seamless in-
teractions across tasks at different granularities and brings
mutual benefits by learning a common and rich pixel-level
understanding. After pretraining on a mixed set of a lim-
ited amount of segmentation data and millions of image-text
pairs, X-Decoder exhibits strong transferability to a wide
range of downstream tasks in both zero-shot and finetuning
settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation
on seven datasets; (2) better or competitive finetuned per-
formance to other generalist and specialist models on seg-
mentation and VL tasks; and (3) flexibility for efficient fine-
tuning and novel task composition (e.g., referring caption-
ing and image editing shown in Fig. 1). Code, demo, video
and visualization are available at: https://x-decoder-
vl.github.io .
| 1. Introduction
Visual understanding at different levels of granularity
has been a longstanding problem in the vision community.
The tasks span from image-level tasks ( e.g., image clas-
sification [14], image-text retrieval, image captioning [8],
and visual question answering (VQA) [2]), region-level lo-
calization tasks ( e.g., object detection and phrase ground-
ing [58]), to pixel-level grouping tasks ( e.g., image in-
The work is initiated during an internship at Microsoft.
stance/semantic/panoptic segmentation [27, 35, 48]). Until
recently, most of these tasks have been separately tackled
with specialized model designs, preventing the synergy of
tasks across different granularities from being exploited. In
light of the versatility of transformers [67], we are now wit-
nessing a growing interest in building general-purpose mod-
els that can learn from and be applied to a diverse set of
vision and vision-language tasks, through multi-task learn-
ing [26, 30], sequential decoding [7, 50, 71, 80], or unified
learning strategy [79, 85, 88, 89]. While these works have
shown encouraging cross-task generalization capabilities,
most target the unification of image-level and region-level
tasks, leaving the important pixel-level understanding un-
derexplored. In [7, 50], the authors attempt to unify seg-
mentation into a decoding of a coordinate sequence or a
color map, which, however, produces suboptimal perfor-
mance and limited support for open-world generalization.
Arguably, understanding images down to the pixel level
is one of the most important yet challenging problems in
that: (1) pixel-level annotations are costly and undoubt-
edly much more scarce compared to other types of anno-
tations; (2) grouping every pixel and recognizing them in
an open-vocabulary manner is less studied; and (3) more
importantly, it is non-trivial to learn from data at two sub-
stantially different granularities while also obtaining mutual
benefits. Some recent efforts have attempted to bridge this
gap from different aspects. In [12], Chen et al. propose
a unified architecture Mask2Former that tackles all three
types of segmentation tasks but in a closed set. To support
open vocabulary recognition, a number of works study how
to transfer or distill rich semantic knowledge from image-
level vision-language foundation models such as CLIP [59]
and ALIGN [32] to specialist models [17,24,60]. However,
all these initial explorations focus on specific segmentation
tasks of interest and do not show generalization to tasks at
different granularities. In this work, we take one step fur-
ther to build a generalized decoder called X-Decoder1to-
wards the unification of pixel-level and image-level vision-
language understanding, as shown in Figur |
Zhong_Understanding_Imbalanced_Semantic_Segmentation_Through_Neural_Collapse_CVPR_2023 | Abstract
A recent study has shown a phenomenon called neural
collapse in that the within-class means of features and the
classifier weight vectors converge to the vertices of a sim-
plex equiangular tight frame at the terminal phase of train-
ing for classification. In this paper, we explore the cor-
responding structures of the last-layer feature centers and
classifiers in semantic segmentation. Based on our empir-
ical and theoretical analysis, we point out that semantic
segmentation naturally brings contextual correlation and
imbalanced distribution among classes, which breaks the
equiangular and maximally separated structure of neural
collapse for both feature centers and classifiers. However,
such a symmetric structure is beneficial to discrimination
for the minor classes. To preserve these advantages, we in-
troduce a regularizer on feature centers to encourage the
network to learn features closer to the appealing struc-
ture in imbalanced semantic segmentation. Experimental
results show that our method can bring significant improve-
ments on both 2D and 3D semantic segmentation bench-
marks. Moreover, our method ranks 1st and sets a new
record (+ 6.8% mIoU) on the ScanNet200 test leaderboard.
| 1. Introduction
The solution structures of the last-layer representation
and classifier provide a geometric perspective to delve into
the learning behaviors in a deep neural network. The neu-
ral collapse phenomenon discovered by Papyan et al. [49]
reveals that as a classification model is trained towards con-
vergence on a balanced dataset, the last-layer feature centers
of all classes will be located on a hyper-sphere with maxi-
mal equiangular separation, as known as a simplex equian-
gular tight frame (ETF), which means that any two centers
have an equal cosine similarity, as shown in Fig. 1a. The
final classifiers will be formed as the same structure and
aligned with the feature centers. The following studies try
to theoretically explain this elegant phenomenon, showing
*Equal contribution. Part of the work was done in MEGVII.
[Figure 1 illustration: panels (a) and (b).]
Figure 1. Illustration of equiangular separation (a) and non-
equiangular separation (b) in a 3D space. Neural collapse reveals
the structure in (a), where features are collapsed into their within-
class centers with maximal equiangular separation as a simplex
ETF, and classifiers are aligned with the same structure. We ob-
serve that in semantic segmentation the feature centers and clas-
sifiers do not satisfy such a structure, as illustrated in (b) for an
example. As some minor class features and classifier vectors lie in
a close position, the discriminate ability of the network degrades.
that neural collapse is the global optimality under the cross-
entropy (CE) and mean squared error (MSE) loss functions
in an approximated model [21, 22, 27, 32, 44, 47, 51, 64, 66,
80]. However, all the current studies on neural collapse fo-
cus on the training in image recognition, which performs
classification for each image. Semantic segmentation as an
important pixel-wise classification problem receives no at-
tention from the neural collapse perspective yet.
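For reference, the simplex ETF mentioned above has a standard closed form in the neural collapse literature: for K classes and feature dimension d >= K, the collection of class vectors can be written as
\mathbf{M} = \sqrt{\tfrac{K}{K-1}}\,\mathbf{U}\Big(\mathbf{I}_K - \tfrac{1}{K}\mathbf{1}_K\mathbf{1}_K^{\top}\Big), \qquad \mathbf{U} \in \mathbb{R}^{d\times K},\ \mathbf{U}^{\top}\mathbf{U} = \mathbf{I}_K,
whose columns \mathbf{m}_k satisfy \|\mathbf{m}_k\| = 1 and \mathbf{m}_i^{\top}\mathbf{m}_j = -\tfrac{1}{K-1} for i \neq j, i.e., any two class vectors share the same, maximally negative cosine similarity.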
In this paper, we explore the solution structures of fea-
ture centers and classifiers in semantic segmentation. Sur-
prisingly, it is observed that the symmetric equiangular sep-
aration as instructed by the neural collapse phenomenon in
image recognition does not hold in semantic segmentation
for both feature centers and classifiers. An example of non-
equiangular separation is illustrated in Fig. 1b. We point out
two reasons that may explain the difference.
First, classification benchmark datasets usually have low
correlation among classes. In contrast, different classes in
the semantic segmentation task are contextually related. In
this case, the classifier needs to be adaptable to class cor-
relation, so does not necessarily equally separate the label
space. We conduct a simple experiment to verify it:
Classifier   ScanNet200   ADE20K
Learned      27.8         44.5
Fixed        26.5 (↓)     43.6 (↓)
It is shown that a semantic segmentation model with the
classifier fixed as a simplex ETF performs much worse than
a learnable classifier. Although using a fixed classifier of
the simplex ETF structure has been proven to be effective
for image recognition [22, 70, 80], we hold that in semantic
segmentation the classifier needs to be learnable and does
not have to be equiangular.
Second, the neural collapse phenomenon observed in im-
age recognition highly relies on a balanced class distribu-
tion of training samples. It is indicated that neural collapse
will be broken when data imbalance emerges, which ex-
plains the deteriorated performance of training on imbal-
anced data [21]. We notice that semantic segmentation nat-
urally suffers from data imbalance because some semantic
classes are prone to cover a large area with significantly
more points/pixels. Under the point/pixel-wise classifica-
tion loss, the gradients will be also extremely imbalanced
with respect to the backbone parameters, which breaks the
equiangular separation structure for feature centers. In this
case, the network makes the feature and classifier of minor
classes lie in a close position and does not have the ability to
discriminate the minor classes. However, the simplex ETF
structure in neural collapse renders feature centers equian-
gular separation and the maximal discriminative ability,
which is able to effectively improve the performance of mi-
nor classes in imbalanced recognition [36, 70, 79].
Inspired by our observations and analyses, we propose
to induce the simplex ETF structure for feature centers, but
keep a learnable classifier to enable adaptive class correla-
tion for semantic segmentation. To this end, we propose
an accompanied center regularization branch that extracts
the feature centers of each semantic class. We regularize
them by another classifier layer that is fixed as a simplex
ETF. The fixed classifier forces feature centers to be aligned
with the appealing structure, which enjoys the equiangu-
lar separation and the maximal discriminative ability. It in
turn helps the feature learning in the original branch to im-
prove the performance of minor classes for better semantic
segmentation quality. We also provide theoretical results
for a rigorous explanation. Our method can be easily inte-
grated into any segmentation architecture and experimental
results also show that our simple method consistently brings
improvements on multiple image and point cloud semantic
segmentation benchmarks.
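As an illustrative sketch only, not the authors' exact implementation, the accompanied center regularization branch described above can be pictured as follows: per-class feature centers are pooled from the batch and scored against a classifier fixed to a simplex ETF. The tensor shapes and the simplex_etf helper below are assumptions.
import torch
import torch.nn.functional as F

def simplex_etf(num_classes, feat_dim):
    # Fixed simplex-ETF classifier: K unit vectors with pairwise cosine -1/(K-1).
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))
    m = u @ (torch.eye(num_classes) - torch.ones(num_classes, num_classes) / num_classes)
    return (num_classes / (num_classes - 1)) ** 0.5 * m   # (feat_dim, K)

def center_regularizer(features, labels, etf, ignore_index=255):
    # features: (N, D) pixel/point features, labels: (N,) semantic class ids < K.
    valid = labels != ignore_index
    features, labels = features[valid], labels[valid]
    classes = labels.unique()
    centers = torch.stack([features[labels == c].mean(0) for c in classes])  # (C, D)
    logits = F.normalize(centers, dim=1) @ etf                               # (C, K)
    return F.cross_entropy(logits, classes)
The loss of this branch would be added to the usual point/pixel-wise segmentation loss, so the original classifier remains learnable while the feature centers are pulled toward the equiangular structure.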
Our overall contributions can be listed as follows:
• We are the first to explore neural collapse in seman-
tic segmentation. We show that semantic segmentation
naturally brings contextual correlation and imbalanced
distribution among classes, which breaks the symmet-
ric structure of neural collapse for both feature centers
and classifiers.
• We propose a center collapse regularizer to encour-
age the network to learn class-equiangular and class-
maximally separated structured features for imbal-
anced semantic segmentation.
• Our method is able to bring significant improvements
on both point cloud and image semantic segmentation.
Moreover, our method ranks 1st and sets a new record
(+6.8 mIoU) on the ScanNet200 test leaderboard.
|
Zhong_Blur_Interpolation_Transformer_for_Real-World_Motion_From_Blur_CVPR_2023 | Abstract
This paper studies the challenging problem of recovering
motion from blur, also known as joint deblurring and inter-
polation or blur temporal super-resolution. The challenges
are twofold: 1) the current methods still leave considerable
room for improvement in terms of visual quality even on the
synthetic dataset, and 2) poor generalization to real-world
data. To this end, we propose a blur interpolation trans-
former (BiT) to effectively unravel the underlying temporal
correlation encoded in blur. Based on multi-scale residual
Swin transformer blocks, we introduce dual-end temporal
supervision and temporally symmetric ensembling strate-
gies to generate effective features for time-varying motion
rendering. In addition, we design a hybrid camera sys-
tem to collect the first real-world dataset of one-to-many
blur-sharp video pairs. Experimental results show that BiT
has a significant gain over the state-of-the-art methods on
the public dataset Adobe240. Besides, the proposed real-
world dataset effectively helps the model generalize well
to real blurry scenarios. Code and data are available at
https://github.com/zzh-tech/BiT.
| 1. Introduction
Aside from time-lapse photography, motion blur is usu-
ally one of the most undesirable artifacts during photo
shooting. Many works have been devoted to studying how
to recover sharp details from the blur, and great progress
has been made. Recently, starting from Jin et al. [9], the
community has focused on the more challenging task of re-
covering high-frame-rate sharp videos from blurred images,
which can be collectively termed joint deblurring and inter-
polation [37, 38] or blur temporal super-resolution [26, 33–
35]. This joint task can serve various applications, such as
video visual perception enhancement, slow motion gener-
ation [26], and fast moving object analysis [33–35]. For
brevity, we will refer to this task as blur interpolation.
Recent works [7, 8, 37] demonstrate that the joint ap-
proach outperforms schemes that cascade separate deblur-ring and video frame interpolation methods. Most joint
approaches follow the center-frame interpolation pipeline,
which means that they can only generate latent frames
for middle moments in a recursive manner. DeMFI [26]
breaks this constraint by combining self-induced feature-
flow-based warping and pixel-flow-based warping to syn-
thesize latent sharp frame at arbitrary time t. However, even
on synthetic data, the performance of current methods is
still far from satisfactory for human perception. We find that
the potential temporal correlation in blur has been underuti-
lized, which allows huge space for performance improve-
ment of the blur interpolation algorithm. In addition, blur
interpolation suffers from the generalization issue because
there is no real-world dataset to support model training.
The goal of this work is to resolve the above two issues.
In light of the complex distribution of time-dependent re-
construction and temporal symmetry property, we propose
dual-end temporal supervision (DTS) and temporally sym-
metric ensembling (TSE) strategies to enhance the shared
temporal features of blur interpolation transformer (BiT) for
time-varying motion reconstruction. In addition, a multi-
scale residual Swin transformer block (MS-RSTB) is intro-
duced to empower the model with the ability to effectively
handle the blur in different scales and to fuse information
from adjacent frames. Due to our design, BiT achieves
state-of-the-art performance on the public benchmark even
without optical flow-based warping operations. Meanwhile,
to provide a real-world benchmark to the community, we
further design an accurate hybrid camera system follow-
ing [32, 51] to capture a dataset (RBI) containing time-
aligned low-frame-rate blurred and high-frame-rate sharp
video pairs. Thanks to RBI, the real data generalization
problem of blur interpolation can be greatly alleviated, and
a more reasonable evaluation platform becomes available.
With these improvements, our model presents impressive
arbitrary blur interpolation performance, and we show an
example of extracting 30 frames of sharp motion from the
blurred image in Fig. 1 for reference.
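As a rough, assumption-laden sketch of the temporally symmetric ensembling idea, reading it as fusing predictions obtained from the original and the time-reversed input order; the actual BiT design may differ, and the model interface below is hypothetical.
import torch

def temporally_symmetric_ensemble(model, blurry_seq, t):
    # blurry_seq: (B, N, C, H, W) consecutive blurred frames; t in [0, 1] is the
    # target time at which a sharp frame is reconstructed.
    pred_fwd = model(blurry_seq, t)
    # Reverse the temporal order and query the mirrored time 1 - t.
    pred_bwd = model(torch.flip(blurry_seq, dims=[1]), 1.0 - t)
    return 0.5 * (pred_fwd + pred_bwd)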
Our contributions can be summarized as follows: 1) We
propose a novel transformer-based model, BiT, for arbitrary
Figure 1. Arbitrary blur interpolation by BiT. This is an example of generating 30 sharp frames from a blurred image using BiT.
time motion from blur reconstruction. BiT outperforms
prior art quantitatively and qualitatively with faster speed.
2) We present and verify two successful strategies includ-
ing dual-end temporal supervision and temporally symmet-
ric ensembling to enhance the shared temporal features for
arbitrary time motion reconstruction. 3) To the best of our
knowledge, we provide the first real-world dataset for gen-
eral blur interpolation tasks. We verify the validity of this
real dataset and its meaningfulness to the community by ex-
tensive experiments.
|
Zhu_Understanding_the_Robustness_of_3D_Object_Detection_With_Birds-Eye-View_Representations_CVPR_2023 | Abstract
3D object detection is an essential perception task in
autonomous driving to understand the environments. The
Bird’s-Eye-View (BEV) representations have significantly
improved the performance of 3D detectors with camera in-
puts on popular benchmarks. However, there still lacks a
systematic understanding of the robustness of these vision-
dependent BEV models, which is closely related to the safety
of autonomous driving systems. In this paper, we evaluate
the natural and adversarial robustness of various represen-
tative models under extensive settings, to fully understand
their behaviors influenced by explicit BEV features com-
pared with those without BEV . In addition to the classic
settings, we propose a 3D consistent patch attack by ap-
plying adversarial patches in the 3D space to guarantee the
spatiotemporal consistency, which is more realistic for the
scenario of autonomous driving. With substantial experi-
ments, we draw several findings: 1) BEV models tend to be
more stable than previous methods under different natural
conditions and common corruptions due to the expressive
spatial representations; 2) BEV models are more vulnera-
ble to adversarial noises, mainly caused by the redundant
BEV features; 3) Camera-LiDAR fusion models have supe-
rior performance under different settings with multi-modal
inputs, but BEV fusion model is still vulnerable to adver-
sarial noises of both point cloud and image. These findings
alert the safety issue in the applications of BEV detectors
and could facilitate the development of more robust models.
*Equal Contribution. Corresponding authors. This work was done
when Zijian Zhu and Hai Chen were visiting Tsinghua University. | 1. Introduction
Autonomous driving systems have great demand for reli-
able 3D object detection models [21], which aim to predict
3D bounding boxes and categories of road objects, in order
to understand the surroundings. To extract holistic repre-
sentations in the 3D space, the Bird’s-Eye-View (BEV) is
commonly adopted as a unified representation [30], since
it contains both locations and semantic features of objects
without being affected by occlusion, and shows promise for
various 3D perception tasks in autonomous driving, such
as map restoration [41, 44]. Although being broadly used
for LiDAR point clouds [28, 65], the BEV representation
has recently achieved great success for 3D object detec-
tion with multiple cameras, arousing tremendous attention
from both industry and academia due to low cost of cam-
era sensors and better exploitation of semantic information
in images. These vision-dependent BEV models1typically
project 2D image features to explicit BEV feature maps in
the 3D space and make predictions based on BEV features
[25,26,31,32,35]. As representative models, BEVDet [26],
BEVDepth [31] and BEVFusion [35] distribute the 2D fea-
tures into 3D space according to the estimated depth map,
while BEVFormer [32] adopts cross attention to query BEV
features from 2D images. With expressive spatial semantics
of BEV , these models achieve the state-of-the-art results on
popular benchmarks ( e.g., nuScenes [8]).
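To make the depth-based 2D-to-3D lifting concrete, here is a schematic sketch in the spirit of BEVDet/BEVDepth-style projection; the tensor shapes and the final voxel-pooling step are illustrative assumptions rather than the exact implementations.
import torch

def lift_image_features(img_feat, depth_logits):
    # img_feat:     (B, C, H, W)  per-camera image features
    # depth_logits: (B, D, H, W)  predicted categorical depth distribution per pixel
    depth_prob = depth_logits.softmax(dim=1)                   # (B, D, H, W)
    # Outer product: each pixel feature is distributed along its depth bins.
    frustum = depth_prob.unsqueeze(1) * img_feat.unsqueeze(2)  # (B, C, D, H, W)
    return frustum  # subsequently scattered ("splatted") into a BEV grid using
                    # camera intrinsics/extrinsics, e.g. by voxel pooling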
Despite the excellent performance, these models are still
far from practical deployment due to the robustness issues.
Previous works have shown that deep learning models are
1In this paper, we use the term vision-dependent BEV models to indi-
cate both camera-only and LiDAR-camera fusion BEV models.
[Figure 1 diagram: (a) Settings for Robustness Evaluation — natural robustness (common corruptions, partial cameras, weather & lighting) and adversarial robustness (ℓp adversarial perturbations, 2D adversarial patch attack, 3D consistent patch attack; FGSM, PGD, TI-MIM; white/black-box; instance-specific, category-specific, multi-view overlap, temporally universal). (b) Pipeline of 3D Consistent Patch Attack — 2D image features, 2D-to-3D projection, 3D BEV features, backbone, detection head, across frames T=1,2,3.]
Figure 1. Overview. (a) We measure the natural and adversarial robustness of vision-dependent BEV models in 3D object detection under
various settings to thoroughly understand the influence of explicit BEV representations on robustness. (b) By pasting an adversarial patch
on a car in the 3D space and projecting it to the 2D images, the generated patch is aligned spatially (across adjacent cameras) and temporally
(across continuous frames). The patch cloaks the car in all 3 frames from BEVDepth [31], bringing high safety risks to autonomous driving.
vulnerable to adversarial examples [20, 51], common cor-
ruptions [22], natural transformations [14, 15], etc. The
robustness issues rooted in the data-driven deep learning
based 3D object detectors can raise severe concerns about
the safety and reliability of autonomous driving, making
it imperative to evaluate and understand model robustness
before being deployed. As vision-dependent BEV mod-
els achieve superior performance and become increasingly
prevalent in the field, it is of particular importance to com-
pare their robustness to other models that do not rely on
BEV representations, given the inherent trade-off between
accuracy and robustness [47, 61].
In this paper, we take the first step to systematically ana-
lyze and understand the robustness of representative vision-
dependent BEV models, by performing thorough experi-
mental evaluations ranging from natural robustness to ad-
versarial robustness as illustrated in Fig. 1(a). We draw sev-
eral important findings as below:
• We first evaluate the natural robustness under common
corruptions, various weather and lighting conditions,
and partially missing cameras. We find that camera-
based BEV models are generally more robust to natu-
ral corruptions of images as a result of the rich spatial
information carried by BEV representations.
• We then evaluate the adversarial robustness under the
global ℓp adversarial perturbations, instance-level and
category-level adversarial patches. We observe that
BEV models are more vulnerable to adversarial noises,
owing to the redundant spatial features represented by
BEV based on an in-depth analysis.
• Based on the results, we find that camera-LiDAR fu-
sion models have superior performance under all set-
tings due to the aid of multi-modal inputs. Besides,
BEVFusion [35] is less robust when both point cloud
and image perturbations are imposed.In addition to digital adversarial patches, we propose a
novel attack method called 3D consistent patch attack . As
shown in Fig. 1(b), adversarial patches are attached to ob-
jects for sptiotemporal consistency in the 3D space. We pro-
vide two case studies of 3D consistent patch attack. First,
we paste patches on objects falling into the overlap regions
of multiple cameras, which are observed in different shapes
from different viewpoints. Second, we generate temporally
universal patches for objects across a continuous sequence
of frames in a certain scene, which is a step further from
case one. Both spatial alignment and temporal consistency
are considered, which distinguishes 3D object detection for
autonomous driving cars from the traditional 2D object de-
tection task. The conclusions are consistent with those of
adversarial robustness above and can inspire more works to
guarantee safe autonomous driving.
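The geometric step that keeps the proposed patch spatiotemporally consistent is ordinary pinhole projection of the 3D patch points into every camera and frame; a minimal sketch (the notation and function names are ours, not the paper's):
import numpy as np

def project_patch(points_3d, K, R, t):
    # points_3d: (N, 3) patch points attached to the object, in the world frame
    # K: (3, 3) camera intrinsics; R, t: world-to-camera rotation and translation
    cam = points_3d @ R.T + t            # (N, 3) in camera coordinates
    cam = cam[cam[:, 2] > 0]             # keep points in front of the camera
    uv = cam @ K.T                       # (M, 3) homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]        # (M, 2) pixel coordinates

# The adversarial texture is optimized once and rendered at these pixel
# locations in every camera image and every frame of the sequence.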
|
Zhou_Improving_Weakly_Supervised_Temporal_Action_Localization_by_Bridging_Train-Test_Gap_CVPR_2023 | Abstract
The task of weakly supervised temporal action localiza-
tion targets at generating temporal boundaries for actions
of interest, meanwhile the action category should also be
classified. Pseudo-label-based methods, which serve as
an effective solution, have been widely studied recently.
However, existing methods generate pseudo labels during
training and make predictions during testing under differ-
ent pipelines or settings, resulting in a gap between train-
ing and testing. In this paper, we propose to generate
high-quality pseudo labels from the predicted action bound-
aries. Nevertheless, we note that existing post-processing,
like NMS, would lead to information loss, which is insuf-
ficient to generate high-quality action boundaries. More
importantly, transforming action boundaries into pseudo
labels is quite challenging, since the predicted action in-
stances are generally overlapped and have different confi-
dence scores. Besides, the generated pseudo-labels can be
fluctuating and inaccurate at the early stage of training. It
might repeatedly strengthen the false predictions if there is
no mechanism to conduct self-correction. To tackle these
issues, we come up with an effective pipeline for learn-
ing better pseudo labels. Firstly, we propose a Gaussian
weighted fusion module to preserve information of action
instances and obtain high-quality action boundaries. Sec-
ond, we formulate the pseudo-label generation as an opti-
mization problem under the constraints in terms of the con-
fidence scores of action instances. Finally, we introduce
the idea of ∆pseudo labels, which enables the model with
the ability of self-correction. Our method achieves supe-
rior performance to existing methods on two benchmarks,
THUMOS14 and ActivityNet1.3, achieving gains of 1.9%
on THUMOS14 and 3.7% on ActivityNet1.3 in terms of av-
erage mAP. Our code is available at https://github.com/zhou745/GauFuse_WSTAL.git.
*Corresponding author.
| 1. Introduction
The task of temporal action localization seeks to iden-
tify the action boundaries and to recognize action categories
that are performed in the video. Action localization can
contribute to video understanding, editing, etc. Previous
works [3, 19, 20, 43, 51] mainly solved this task in the fully
supervised setting, which requires both video-level labels
and frame-wise annotations. However, frame-wisely anno-
tating videos is labor-intensive and time-consuming. To re-
duce the annotation cost, researchers start to focus on the
weakly supervised setting. Considering the rich video re-
sources from various video websites and apps, weakly su-
pervised setting would save tremendous annotation efforts.
Unlike its supervised counterpart, the weakly supervised
temporal action localization task only requires video-level
category labels. The existing works mainly follow the
localization-by-classification pipeline [40,50], which trains
a video-level classifier with category labels [32], and ap-
plies the trained classifier to each video snippet1. However,
due to the lack of fine-grained annotations, the model may
assign high confidence to incorrect snippets such as the con-
textual background, which typically has a high correlation
with the video-level labels, or only focus on the salient snip-
pets, leading to incomplete localization results. There are
many studies [21, 23, 25] that tried to address this discrep-
ancy between classification and localization, and one of the
promising solutions is to generate and utilize pseudo labels.
The advantage of using pseudo labels is that snippets
are supervised with snippet-wise labels instead of video-
level labels. Existing works [28,36,46,47] achieve remark-
able results by introducing pseudo labels into this prob-
1We view snippets as the smallest granularity since the high-level fea-
tures of consecutive frames vary smoothly over time [12,42]. In our work,
we treat every 16 frames as a snippet
lem. A commonly used strategy for generating pseudo la-
bels is to directly utilize the temporal class activation map
(TCAM) generated in previous training iterations. Never-
theless, we would like to argue that the TCAMs are not
desirable pseudo labels. During testing, our goal is to ob-
tain the action boundaries; employing the TCAMs as train-
ing targets raises a discrepancy between training and test-
ing because they are quite different from the actual action
boundaries. An intuitive way to address this issue is to
leverage the predicted action boundaries as pseudo labels.
However, it is non-trivial to achieve this goal. First, cur-
rent post-processing schemes, such as NMS, would induce
a large amount of information loss and are not sufficient to
obtain high-quality action boundaries for generating effec-
tive pseudo labels. Second, the predicted action instances
are usually overlapped with each other and have different
confidence scores, so it is hard to assign the action categories
and confidence scores for each snippet.
To address the above issues, we propose the following
two modules. First, we propose a Gaussian Weighted
Instance Fusion module to preserve information on the
boundary distributions and produce high-quality action
boundaries. Specifically, this module weightedly fuses the
information of overlapped action instances. Each candi-
date action instance is treated as an instance sampled from
a Gaussian distribution. The confidence score of each ac-
tion instance is viewed as its probability of being sampled.
Accordingly, we can obtain the most possible action bound-
aries and their confidence scores by estimating the means
of Gaussians from those candidate action instances. In this
way, we can produce better action boundaries, which in
turn help to generate more reasonable pseudo labels.
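One plausible reading of this Gaussian weighted fusion, sketched below for a single group of overlapping candidates: boundaries are averaged with confidence-score weights to estimate the Gaussian means. The grouping rule and the exact weighting in the paper may differ.
import numpy as np

def fuse_instances(starts, ends, scores):
    # starts, ends, scores: arrays over a group of overlapping candidate instances
    # (e.g., all candidates with high IoU against the top-scoring one).
    w = np.asarray(scores, dtype=float) / np.sum(scores)
    start = float(np.dot(w, starts))   # confidence-weighted mean of left boundaries
    end = float(np.dot(w, ends))       # confidence-weighted mean of right boundaries
    conf = float(np.max(scores))       # keep the strongest confidence for the fused one
    return start, end, conf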
After generating high-quality action boundaries, we need
to convert them into snippet-wise pseudo labels. To han-
dle the overlapped action instances and assign snippets with
proper confidence scores, we propose a LinPro Pseudo La-
bel Generation module to formulate the process of pseudo-
label generation as a ℓ1-minimization problem. First, we
restrict that the average score of snippets within an action
boundary should be equal to the confidence score of this ac-
tion instance. This constraint guarantees that we can main-
tain the information of confidence scores in the generated
pseudo labels. Second, snippets within an action instance
might be equivalent in terms of their contribution to the con-
fidence score. Thus we require snippet-wise scores within
each action instance to be uniform. Based on the two con-
straints, we formulate the pseudo label generation as an op-
timization problem and solve it to obtain pseudo labels that
are consistent with our predicted action boundaries.
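A hedged illustration of how the first constraint could enter a per-class linear program (the uniformity constraint and the paper's exact objective are not encoded here; variable names are ours):
import numpy as np
from scipy.optimize import linprog

def snippet_pseudo_labels(T, instances):
    # T: number of snippets; instances: list of (start_idx, end_idx, confidence).
    # Variables y_t in [0, 1]; minimize the l1 norm sum(y) subject to the mean of
    # y over each predicted instance equalling that instance's confidence score.
    A_eq, b_eq = [], []
    for s, e, conf in instances:
        row = np.zeros(T)
        row[s:e] = 1.0 / (e - s)
        A_eq.append(row)
        b_eq.append(conf)
    res = linprog(c=np.ones(T), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * T, method="highs")
    return res.x
Overlapping instances simply contribute several equality rows over the shared snippets.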
Furthermore, there is still one problem regarding the
use of pseudo labels. Since the generated pseudo labels can
be fluctuating and inaccurate at the early stage of training, with-
out a proper self-correction mechanism, the model would
keep generating wrong pseudo labels of high confidences at
later training stages. To address this issue, we propose to
utilize the ∆pseudo labels , instead of the original pseudo
labels, as our training targets. We calculate the differencebetween the pseudo labels of consecutive training epochs as
the∆pseudo labels. In general, the model would provide
more accurate predictions along with the training. In this
way, the model will update its predictions toward the class
with the confidence increasing instead of the class with the
largest pseudo label value, and thus empowers the model
with the ability of self-correction.
The contribution of this paper is four-fold. (a) We pro-
pose a Gaussian Weighted Instance Fusion module, which
can effectively generate high-quality action boundaries. (b)
We propose a novel LinPro Pseudo Label Generation strat-
egy by transforming the process of pseudo-label generation
into a ℓ1-minimization problem. (c) We propose to utilize
∆pseudo labels to enable model with self-correction ability
for the generated pseudo labels. (d) Compared with state-
of-the-art methods, the proposed framework yields signifi-
cant improvements of 1.9% and3.7% in terms of average
mAP on THUMOS14 and ActivityNet1.3, respectively.
|
Zhu_Confidence-Aware_Personalized_Federated_Learning_via_Variational_Expectation_Maximization_CVPR_2023 | Abstract
Federated Learning (FL) is a distributed learning
scheme to train a shared model across clients. One com-
mon and fundamental challenge in FL is that the sets of data
across clients could be non-identically distributed and have
different sizes. Personalized Federated Learning (PFL) at-
tempts to solve this challenge via locally adapted models.
In this work, we present a novel framework for PFL based
on hierarchical Bayesian modeling and variational infer-
ence. A global model is introduced as a latent variable to
augment the joint distribution of clients’ parameters and
capture the common trends of different clients, optimiza-
tion is derived based on the principle of maximizing the
marginal likelihood and conducted using variational expec-
tation maximization. Our algorithm gives rise to a closed-
form estimation of a confidence value which comprises the
uncertainty of clients’ parameters and local model devia-
tions from the global model. The confidence value is used to
weigh clients’ parameters in the aggregation stage and ad-
just the regularization effect of the global model. We evalu-
ate our method through extensive empirical studies on mul-
tiple datasets. Experimental results show that our approach
obtains competitive results under mild heterogeneous cir-
cumstances while significantly outperforming state-of-the-
art PFL frameworks in highly heterogeneous settings.
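As a rough sketch of the confidence-weighted aggregation mentioned above, taking the per-client confidence values as given (in the paper they come from the variational EM derivation) and assuming a plain state-dict parameter layout:
import torch

def aggregate(client_params, confidences):
    # client_params: list of per-client state dicts; confidences: per-client scalars
    # (larger = more certain local update). The server forms a confidence-weighted
    # average as the new global model.
    w = torch.tensor(confidences, dtype=torch.float32)
    w = w / w.sum()
    keys = client_params[0].keys()
    return {k: sum(w[j] * client_params[j][k] for j in range(len(client_params)))
            for k in keys}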
| 1. Introduction
Federated learning (FL) is a distributed learning frame-
work, in which clients optimize a shared model with their
local data and send back parameters after training, and a
central server aggregates locally updated models to obtain
a global model that it re-distributes to clients [24]. FL
is expected to address privacy concerns and to exploit the
computational resources of a large number of edge devices.
Despite these strengths, there are several challenges in the
*Authors with equal contribution.
†Work was done at KU Leuven prior to joining Amazon.
application of FL. One of them is the statistical hetero-
geneity of client data sets since in practice clients’ data
correlate with local environments and deviate from each
other [13,18,19]. The most common types of heterogeneity
are defined as:
Label distribution skew. Let J be the number of clients
and the data distribution of client j be P_j(x, y), which can
be rewritten as P_j(x|y)P_j(y); two kinds of non-identical
scenarios can be identified. One of them is label distribution
skew, that is, the label distributions {P_j(y)}_{j=1}^{J} vary across
clients while the conditional generating distributions
{P_j(x|y)}_{j=1}^{J} are assumed to be the same. This could hap-
pen when certain types of data are underrepresented in the
local environment.
Label concept drift. Another common type of non-IID
scenario is label concept drift, in which the label distributions
{P_j(y)}_{j=1}^{J} are the same but the conditional generating
distributions {P_j(x|y)}_{j=1}^{J} are different across different
clients. This could happen when features of the same type
of data differ across clients and correlates with their envi-
ronments, e.g. the Labrador Retriever (most popular dog in
the United States) and the Border Collie (most popular dog
in Europe) look different, thus the dog pictures taken by the
clients in these two areas contain label concept drift .
Data quantity disparity. Additionally, clients may pos-
sess different amounts of data. Such data quantity disparity
can lead to inconsistent uncertainties of the locally updated
models and heterogeneity in the number of local updates. In
practice, the amount of data could span a large range across
clients, for example large hospitals usually have many more
medical records than clinics. In particular, data quantity dis-
tributions often exhibit that large datasets are concentrated
in a few locations, whereas a large amount of data is scat-
tered across many locations with small dataset sizes [11,32].
It has been proven that if federated averaging ( FedAvg
[24]) is applied, the aforementioned heterogeneity will slow
down the convergence of the global model and in some
cases leads to arbitrary deviation from the optimum [19,33].
Several works have been proposed to alleviate this prob-
lem [4,18,33]. Another stream of work is personalized fed-
Zhou_Interactive_Segmentation_As_Gaussion_Process_Classification_CVPR_2023 | Abstract
Click-based interactive segmentation (IS) aims to extract
the target objects under user interaction. For this task, most
of the current deep learning (DL)-based methods mainly
follow the general pipelines of semantic segmentation. Al-
beit achieving promising performance, they do not fully and
explicitly utilize and propagate the click information, in-
evitably leading to unsatisfactory segmentation results, even
at clicked points. Against this issue, in this paper, we propose
to formulate the IS task as a Gaussian process (GP)-based
pixel-wise binary classification model on each image. To
solve this model, we utilize amortized variational inference
to approximate the intractable GP posterior in a data-driven
manner and then decouple the approximated GP posterior
into double space forms for efficient sampling with linear
complexity. Then, we correspondingly construct a GP classi-
fication framework, named GPCIS, which is integrated with
the deep kernel learning mechanism for more flexibility. The
main specificities of the proposed GPCIS lie in: 1) Under the
explicit guidance of the derived GP posterior, the informa-
tion contained in clicks can be finely propagated to the entire
image and then boost the segmentation; 2) The accuracy of
predictions at clicks has good theoretical support. These
merits of GPCIS as well as its good generality and high
efficiency are substantiated by comprehensive experiments
on several benchmarks, as compared with representative
methods both quantitatively and qualitatively. Codes will be
released at https://github.com/zmhhmz/GPCIS CVPR2023.
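For intuition only, a stripped-down sketch of the GP view used here: clicked pixels act as labeled training points and every other pixel is scored through kernel similarities between deep features. The amortized variational approximation, decoupled sampling, and deep kernel of GPCIS are abstracted away, and the interface below is an assumption.
import torch

def gp_click_inference(feat, click_idx, click_labels, lengthscale=1.0, noise=1e-2):
    # feat: (N, D) per-pixel deep features (flattened image); click_idx: indices of
    # clicked pixels; click_labels: +1 foreground / -1 background.
    def rbf(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * lengthscale ** 2))
    x_tr = feat[click_idx]                                    # (M, D)
    K_tr = rbf(x_tr, x_tr) + noise * torch.eye(len(click_idx))
    K_te = rbf(feat, x_tr)                                    # (N, M)
    alpha = torch.linalg.solve(K_tr, click_labels.float().unsqueeze(1)).squeeze(1)
    return K_te @ alpha                                       # (N,) posterior-mean score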
| 1. Introduction
Driven by the huge potential in reducing the pixel-
wise annotation cost, interactive segmentation (IS) has
sparked much research interest [14], which aims to seg-
ment the target objects under user interaction with less
interaction cost. Among various types of user interac-
*Corresponding author
[Figure 1 diagram: panels (a) and (b) — input image, DNN deep features, pixel-wise classifier (a) vs. GP posterior over foreground/background feature space (b).]
Figure 1. Classification procedure for an exemplar unclicked pixel
(blue box) in the IS task. (a) Most current deep learning-based IS
methods individually perform pixel-wise classification on the deep
feature x; (b) We formulate the IS task as a Gaussian process (GP)
classification model on each image, where red (green) clicks are
viewed as training data with foreground (background) labels, and
the unclicked pixel as the to-be-classified testing data. Based on the
derived GP posterior inference framework, the relations between
the deep feature xof the testing pixel (blue solid line) and that
of other pixels (dashed lines) can be finely modeled and then the
information at clicks can be propagated to the entire image for
improved prediction.
tion [1 –3, 27, 30, 49, 52, 54], in this paper, we focus on the
popular click-based mode, where positive annotations are
clicked on the target object while negative ones are clicked
in the background regions [7, 18, 25, 40, 41].
Recent years have witnessed the promising success of
deep learning (DL)-based methods in the IS task. The most
commonly adopted research line is that the user interaction
is encoded as click maps and fed into a deep neural network
(DNN) together with input images to extract deep features
for the subsequent segmentation [41, 51]. However, these
methods generally suffer from two limitations: 1) As shown
in Fig. 1 (a), after extracting the deep features, they generally
perform pixel-wise classification without specific designs for
the IS task. As a result, during the last-layer classification,
the deep features of different pixels are not fully interactive
and the information contained in clicked pixels cannot be
finely propagated to other pixels under explicit regularization.
2) There is no explicit theoretical support that the clicked
regions can be properly activated and correctly classified. Al-
though some researchers have proposed different strategies,
e.g., non-local-based modules [6] and the backpropagating
refinement scheme [18, 40], they usually incur extra compu-
tational cost and are not capable enough to deal with the two
problems simultaneously. Besides, the relations between
deep features of different pixels are generally characterized
and captured based on off-the-shelf network modules. Such
implicit design makes it hard to clearly understand the work-
ing mechanism underlying these methods.
To alleviate these aforementioned issues, inspired by the
intrinsic capabilities of Gaussian process (GP) models, e.g.,
explicitly measuring the relations between data points by
a kernel function, and promoting accurate predictions at
training data via interpolation, we rethink the IS task and
attempt to construct a GP-based inference framework for the
specific IS task. Concretely, as shown in Fig. 1 (b), we pro-
pose to treat the IS task from an alternative perspective and
reformulate it as a pixel-level binary classification problem
on each image, where clicks are viewed as training pixels
with classification labels, i.e., foreground or background,
and the unclicked points as the to-be-classified testing pixels.
With such understanding, we construct the corresponding
GP classification model. To solve it, we propose to utilize
the amortized variational inference to efficiently approxi-
mate the intractable GP posterior in a data-driven manner,
and then adopt the decoupling techniques [47, 48] to achieve
the GP posterior sampling with linear complexity. To im-
prove the learning flexibility, we further embed the deep
kernel learning strategy into the decoupled GP posterior in-
ference procedure. Finally, by correspondingly integrating
the derived GP posterior sampling mechanism with DNN
backbones, we construct a GP Classification-based Interac-
tive Segmentation framework, called GPCIS. In summary,
our contributions are mainly three-fold:
1) We propose to carefully formulate the IS task as a Gaus-
sian process classification model on each image. To adapt
the GP model to the IS task, we propose specific designs and
accomplish the approximation and efficient sampling of the
GP posterior, which are then effectively integrated with the
deep kernel learning mechanism for more flexibility.
2) We build a concise and clear interactive segmentation
network under a theoretically sound framework. As shown
in Fig. 1 (b), the correlation between the deep features of dif-
ferent pixels is modeled by GP posterior. With such explicit
regularization, the information contained in clicks can be
finely propagated to the entire image and boost the prediction
of unclicked pixels. Besides, our method can provide ratio-
nal theoretical support for accurate predictions at clicked
points. These merits are finely validated in Sec. 5.2.
3) Extensive experimental comparisons as well as model ver-ification comprehensively substantiate the superiority of our
proposed GPCIS in segmentation quality and interaction effi-
ciency. It is worth mentioning that the proposed GPCIS can
consistently achieve superior performance under different
backbone segmentors, showing its fine generality.
|
Zins_Multi-View_Reconstruction_Using_Signed_Ray_Distance_Functions_SRDF_CVPR_2023 | Abstract
In this paper, we investigate a new optimization frame-
work for multi-view 3D shape reconstructions. Recent
differentiable rendering approaches have provided break-
through performances with implicit shape representations
though they can still lack precision in the estimated geome-
tries. On the other hand multi-view stereo methods can
yield pixel wise geometric accuracy with local depth pre-
dictions along viewing rays. Our approach bridges the gap
between the two strategies with a novel volumetric shape
representation that is implicit but parameterized with pixel
depths to better materialize the shape surface with consis-
tent signed distances along viewing rays. The approach re-
tains pixel-accuracy while benefiting from volumetric inte-
gration in the optimization. To this aim, depths are opti-
mized by evaluating, at each 3D location within the vol-
umetric discretization, the agreement between the depth
prediction consistency and the photometric consistency for
the corresponding pixels. The optimization is agnostic to
the associated photo-consistency term which can vary from
a median-based baseline to more elaborate criteria, e.g.
learned functions. Our experiments demonstrate the ben-
efit of the volumetric integration with depth predictions.
They also show that our approach outperforms existing ap-
proaches over standard 3D benchmarks with better geome-
try estimations.
| 1. Introduction
Reconstructing 3D shape geometries from 2D image
observations has been a core issue in computer vision
for decades. Applications are numerous and range from
robotics to augmented reality and human digitization,
among others. When images are available in sufficient
numbers, multi-view stereo (MVS) is a powerful strategy
that has emerged in the late 90s (see [58]). In this strategy,
3D geometric models are built by searching for surface
Figure 1. Reconstructions with various methods using 14 images
of a model from BlendedMVS [70].
locations in 3D where 2D image observations concur, a
property called photo-consistency. This observation con-
sistency strategy has been later challenged by approaches
in the field that seek instead for observation fidelity
using differentiable rendering. Given a shape model that
includes appearance information, rendered images can
be compared to observed images and the model can thus
be optimized. Differentiable rendering adapts to several
shape representations including point clouds, meshes and,
more recently, implicit shape representations. The latter
can account for occupancy, distance functions or densities,
which are estimated either directly over discrete grids or
through continuous MLP network functions. Associated to
differentiable rendering these implicit representations have
provided state-of-the-art approaches to recover both the
geometry and the appearance of 3D shapes from 2D images.
With the objective to improve the precision of the recon-
structed geometric models and their computational costs,
we investigate an approach that takes inspiration from dif-
ferentiable rendering methods while retaining beneficial as-
pects of MVS strategies. Following volumetric methods we
use a volumetric signed ray distance representation which
we parameterize with depths along viewing rays, a rep-
resentation we call the Signed Ray Distance Function or
SRDF. This representation makes the shape surface explicit
with depths while keeping the benefit of better distributed
gradients with a volumetric discretization. To optimize this
shape representation we introduce an unsupervised differ-
entiable volumetric criterion that, in contrast to differen-
tiable rendering approaches, does not require color estima-
tion. Instead, the criterion considers volumetric 3D samples
and evaluates whether the signed distances along rays agree
at a sample when it is photo-consistent and disagree other-
wise. While being volumetric our proposed approach shares
the following MVS benefits:
i) No expensive ray tracing in addition to color decisions
is required;
ii) The proposed approach is pixel-wise accurate by con-
struction;
iii) The optimization can be performed over groups of
cameras defined with visibility considerations. The lat-
ter enables parallelism between groups while still en-
forcing consistency over depth maps.
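To make the SRDF parameterization concrete, here is a simplified sketch of evaluating a signed ray distance at a 3D sample for one camera, given that camera's predicted depth map. The interfaces, and the use of z-depth as the distance proxy, are assumptions; the paper's agreement criterion is richer.
import numpy as np

def srdf(sample_xyz, depth_map, K, R, t):
    # Signed distance between the predicted surface and the 3D sample along the
    # viewing ray it projects to (positive for samples in front of the surface).
    cam = R @ sample_xyz + t                     # camera coordinates
    u, v = (K @ cam)[:2] / cam[2]                # pixel the sample falls on
    d_pred = depth_map[int(round(v)), int(round(u))]
    return d_pred - cam[2]

# Depths agree at a photo-consistent sample when srdf() is close to zero in all
# cameras seeing it; the volumetric criterion penalizes disagreement elsewhere.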
In addition, the volumetric scheme provides a testbed to
compare different photo-consistency priors in a consistent
way with space discretizations that do not depend on the
estimated surface.
To evaluate the approach, we conducted experiments on
real data from DTU Robot Image Data Sets [23], Blended-
MVS [70] and on synthetic data from Renderpeople [3] as
well as on real human capture data. Ablation tests demon-
strate the respective contributions of the SRDF parametriza-
tion and the volumetric integration in the shape reconstruc-
tion process. Comparisons with both MVS and differentiable
rendering methods also show that our method consistently
outperforms state-of-the-art both quantitatively and qualita-
tively with better geometric details.
|
Zhu_Continual_Semantic_Segmentation_With_Automatic_Memory_Sample_Selection_CVPR_2023 | Abstract
Continual Semantic Segmentation (CSS) extends static
semantic segmentation by incrementally introducing new
classes for training. To alleviate the catastrophic forgetting
issue in CSS, a memory buffer that stores a small number
of samples from the previous classes is constructed for re-
play. However, existing methods select the memory samples
either randomly or based on a single-factor-driven hand-
crafted strategy, which has no guarantee to be optimal. In
this work, we propose a novel memory sample selection
mechanism that selects informative samples for effective re-
play in a fully automatic way by considering comprehen-
sive factors including sample diversity and class perfor-
mance. Our mechanism regards the selection operation as
a decision-making process and learns an optimal selection
policy that directly maximizes the validation performance
on a reward set. To facilitate the selection decision, we de-
sign a novel state representation and a dual-stage action
space. Our extensive experiments on Pascal-VOC 2012 and
ADE 20K datasets demonstrate the effectiveness of our ap-
proach with state-of-the-art (SOTA) performance achieved,
outperforming the second-place one by 12.54% for the 6-
stage setting on Pascal-VOC 2012.
| 1. Introduction
Semantic segmentation is an important task with a lot
of applications. The rapid development of algorithms
[11, 20, 22, 30, 32, 56] and the growing number of publicly
available large datasets [14, 55] have led to great success
in the field. However, in many scenarios, the static model
cannot always meet real-world demands, as the constantly
changing environment calls for the model to be constantly
updated to deal with new data, sometimes with new classes.
A naive solution is to apply continual learning by incre-
mentally adding new classes to train the model. However, it
*Equal Contribution
†Corresponding Author
is not as simple as it looks – almost every time, since the pre-
vious classes are inaccessible in the new stage, the model
forgets the information of them after training for the new
classes. This phenomenon, namely catastrophic forgetting,
has been a long-standing issue in the field. Furthermore,
the issue is especially severe in dense prediction tasks like
semantic segmentation.
Facing the issue, existing works [1, 4, 5, 7, 17, 25, 26, 38,
43] propose to perform exemplar replay by introducing a
memory buffer to store some samples from previous classes.
By doing so, the model can be trained with samples from
both current and previous classes, resulting in better gener-
alization. However, since the number of selected samples
in the memory is much smaller than those within the new
classes, the selected samples are easy to be ignored or cause
overfitting when training due to the small number. Careful
selection of the samples is required, which naturally brings
the question: How to select the best samples for replay?
Some attempts have been made to answer the question,
aiming to seek the most effective samples for replay. Re-
searchers propose different criteria that are mostly manu-
ally designed based on some heuristic factors like diver-
sity [1, 4, 5, 25, 26, 38, 43]. For example, [33] selects the
most common samples with the lowest diversity for replay,
believing that the most representative samples will elevate
the effectiveness of replay. However, the most common
samples may not always be the samples being forgotten
in later stages. [4] proposes to save both the low-diversity
samples near the distribution center and high-diversity sam-
ples near the classification boundaries. However, new chal-
lenges arise since the memory length is limited, so it is
challenging to find the optimal quotas for the two kinds of
samples to promote replay effectiveness to the greatest ex-
tent. Moreover, most of the existing methods are designed
based on a single factor, the selection performance, how-
ever, can be influenced by many factors with complicated
relationships. For example, besides diversity, memory sam-
ple selection should also be class-dependent because the
hard classes need more samples to replay in order to allevi-
ate the more severe catastrophic forgetting issue. Therefore,
we argue that it is necessary to select memory samples in a
more intelligent way by considering the more comprehen-
sive factors and their complicated relationships.
Witnessing the challenge, in this work, we propose
a novel automatic sample selection mechanism for CSS.
Our key insight is that selecting memory samples can be
regarded as a decision-making task in different training
stages, so we formulate the sample selection process as a
Markov Decision Process, and we propose to solve it au-
tomatically with a reinforcement learning (RL) framework.
Specifically, we employ an agent network to make the se-
lection decision, which receives the state representation as
the input and selects optimal samples for replay. To help the
agent make wiser decisions, we construct a novel and com-
prehensive state combined with the sample diversity and
class performance features. In the process of state com-
putation, the inter-sample similarity needs to be measured.
We found the naive similarity measurement by computing
the prototype distance is ineffective in segmentation, as the
prototype loses the local structure details that are important
for making pixel-level predictions. Therefore, we propose a
novel similarity measured in a multi-structure graph space
to get a more informative state. We further propose a dual-
stage action space, in which the agent not only selects the
most appropriate samples to update the memory, but also
enhances the selected samples to have better replay effec-
tiveness in a gradient manner. All the careful designs allow
the RL mechanism to be effective in solving the sample se-
lection problem for CSS.
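A structural sketch of one selection step under this formulation; the helper callables stand in for the paper's state construction, memory update, gradient-based enhancement, and reward evaluation, and are passed in as assumptions rather than concrete implementations.
def memory_selection_step(agent, candidates, memory, build_state, update_memory,
                          enhance_samples, evaluate_reward):
    # One selection step at the end of an incremental training stage.
    state = build_state(candidates, memory)     # diversity + class-performance cues
    action = agent.act(state)                   # stage-1 action: which samples to keep
    memory = update_memory(memory, candidates, action)
    memory = enhance_samples(memory)            # stage-2 action: gradient-based refinement
    reward = evaluate_reward()                  # e.g., validation mIoU on the reward set
    agent.update(state, action, reward)
    return memory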
We perform extensive experiments on Pascal-VOC 2012
and ADE 20K datasets, which demonstrate the effective-
ness of our proposed novel paradigm for CSS. Benefit-
ing from the reward-driven optimization, the automatically
learned policy can help select the more effective samples,
thus resulting in better performance than the previous strate-
gies. On both datasets, our method achieves state-of-the-art
(SOTA) performance. To summarize, our contributions are
as follows:
• We formulate the sample selection of CSS as a Markov
Decision Process, and introduce a novel and effective
automatic paradigm for sample replay in CSS enabled
by reinforcement learning.
• We design an effective RL paradigm tailored for CSS,
with novel state representations containing multiple
factors that can guide the selection decision, and a
dual-stage action space to select samples and boost
their replay effectiveness.
• Extensive experiments demonstrate our automatic
paradigm for sample replay can effectively alleviate
the catastrophic forgetting issue with state-of-the-art
(SOTA) performance achieved. |
Zhu_Probability-Based_Global_Cross-Modal_Upsampling_for_Pansharpening_CVPR_2023 | Abstract
Pansharpening is an essential preprocessing step for re-
mote sensing image processing. Although deep learning
(DL) approaches performed well on this task, current up-
sampling methods used in these approaches only utilize the
local information of each pixel in the low-resolution multi-
spectral (LRMS) image while neglecting to exploit its global
information as well as the cross-modal information of the
guiding panchromatic (PAN) image, which limits their per-
formance improvement. To address this issue, this paper
develops a novel probability-based global cross-modal up-
sampling (PGCU) method for pan-sharpening. Precisely,
we first formulate the PGCU method from a probabilis-
tic perspective and then design an efficient network mod-
ule to implement it by fully utilizing the information men-
tioned above while simultaneously considering the chan-
nel specificity. The PGCU module consists of three blocks,
i.e., information extraction (IE), distribution and expecta-
tion estimation (DEE), and fine adjustment (FA). Exten-
sive experiments verify the superiority of the PGCU method
compared with other popular upsampling methods. Addi-
tionally, experiments also show that the PGCU module can
help improve the performance of existing SOTA deep learn-
ing pansharpening methods. The codes are available at
https://github.com/Zeyu-Zhu/PGCU .
| 1. Introduction
Pansharpening aims to reconstruct a high-resolution
multispectral image (HRMS) from a low-resolution multi-
spectral image (LRMS) under the guidance of a panchro-
matic image (PAN). It’s an indispensable pre-processing
step for many subsequent remote sensing tasks, such as
*Corresponding author
Figure 1. Comparison between local upsampling methods and our
proposed PGCU method. The local method has limited receptive
field and thus only utilizes the local information of LRMS for up-
sampling, while our proposed PGCU method can fully exploit the
rich global information of LRMS and the cross-modal global in-
formation of PAN.
object detection [ 11,26], change detection [ 1,19], unmix-
ing [3] and classification [ 7,8].
The last decades have witnessed the great development
of pansharpening methods. The typical approaches include
component substitution (CS) approaches [ 10,18,23,24],
multi-resolution analysis (MRA) methods [ 21,25,31], and
variational optimization (VO) methods [ 12,13,15,16,38].
Recently, with the rapid development of deep learning,
plenty of deep learning-based methods [ 4,5,14,43,45] have
been proposed to tackle this task due to its powerful non-
linear fitting and feature extraction ability. Among these
methods, almost all the approaches have a pipeline that up-
samples the LRMS image first and then carries out other
super-resolution operations. These approaches treat upsam-
pling as an essential and indispensable component for this
task. For instance, as for residual networks (e.g., PanNet),
the upsampled image is directly added to the network’s out-
put, which makes the quality of the upsampled image an
essential factor for model performance.
However, hardly any approaches have explored designing a
reasonable upsampling method for pansharpening but just
simply utilized bicubic interpolation [ 9] and transposed
convolution [ 17] as their upsampling module. At the same
time, upsampling methods proposed for other tasks aren’t
suitable for pansharpening either, such as attention-based
image upsampling (ABIU) [ 22] and ESPCNN [ 32]. Almost
all the aforementioned upsampling methods are in the form
of local interpolation and thus suffer from a limited recep-
tive field issue. Therefore, these local interpolation-based
upsampling methods fail to exploit similar patterns globally,
while there are usually many non-local similar patches in
remote sensing images, as shown in Figure 1(b). Addition-
ally, almost all these upsampling methods are not capable of
utilizing useful structure information from the PAN image.
Also, some existing upsampling methods, e.g., ABIU [ 22]
ignore channel specificity, which utilizes the same weight
for the same position of all channels, which is unsuitable
for pansharpening due to the significant difference among
spectral image channels. In summary, these existing up-
sampling methods suffer from either insufficient utilization
of information (i.e., global information of LRMS, structure
information of PAN) or incomplete modeling of the prob-
lem (i.e., channel specificity issue).
To address the aforementioned problems, we propose
a novel probability-based global cross-modal upsampling
method (PGCU) to exploit cross-modal and global infor-
mation while considering channel specificity. The reason
why we utilize probabilistic modeling is that pansharpening
is essentially an ill-posed image inverse problem. Proba-
bilistic modeling can be used to better adapt to the char-
acteristics of the problem itself. Specifically, an approxi-
mate global discrete distribution value is sampled from the
pixel value space for each channel, which can thus charac-
terize the common property of each channel and the dis-
tinctive property of different channels. Then, we establish a
cross-modal feature vector for each pixel in the upsampled
HRMS image and discrete distribution value, using not only
the LRMS image but also the PAN image. Inspired by the
main idea of Transformer [ 36], we utilize vector similarity
to calculate the probability value for each pixel on its chan-
nel distribution. Finally, PGCU calculates the pixel values of the upsampled image by taking the expectation.
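A minimal sketch of the expectation step described above, assuming (as an illustration, not the exact PGCU design) that each target pixel holds, per channel, a probability vector over V sampled intensity values, obtained from Transformer-style similarity between per-pixel and per-value feature vectors:

```python
# Hypothetical sketch of probability-based upsampling by expectation.
# Tensor shapes and the similarity-based probability are illustrative
# assumptions, not the exact PGCU construction.
import torch
import torch.nn.functional as F

B, C, H, W = 2, 4, 64, 64   # upsampled HRMS size (C spectral channels)
V = 32                       # number of sampled distribution values per channel
D = 16                       # feature dimension

pixel_feat = torch.randn(B, C, H * W, D)   # cross-modal feature per target pixel (from LRMS + PAN)
value_feat = torch.randn(B, C, V, D)       # feature per sampled distribution value
values = torch.rand(B, C, V)               # the sampled intensity values themselves

# Transformer-style similarity -> probability over the V candidate values.
logits = torch.einsum('bcnd,bcvd->bcnv', pixel_feat, value_feat) / D ** 0.5
prob = F.softmax(logits, dim=-1)           # (B, C, H*W, V)

# Each upsampled pixel is the expectation of its per-channel distribution.
hrms = torch.einsum('bcnv,bcv->bcn', prob, values).reshape(B, C, H, W)
print(hrms.shape)  # torch.Size([2, 4, 64, 64])
```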
To implement the PGCU method, we design a network
module containing three blocks, i.e., information extraction
(IE) module block, distribution and expectation estimation
(DEE) block, and fine adjustment (FA) block. Firstly, IE ex-
tracts spectral and spatial information from LRMS and PAN
images to generate channel distribution value and cross-
modal information. Next, DEE utilizes this information to
construct cross-modal feature vectors for each pixel in the
upsampled image and generate the distribution value, re-
spectively. Then, they are used to estimate the distribution
probability for each pixel in the upsampled image. Finally,
FA further compensates for using the local information and
channel correlation of the upsampled image.
To further explore the results obtained by PGCU, we uti-
lize information theory to analyze pixel distribution. Specif-
ically, by clustering pixels of the obtained upsampled image
using JS divergence as the distance measurement, the spa-
tial non-local correlation property of the image can be eas-
ily observed. Besides, by visualizing the information en-
tropy image of each channel in the upsampled image, chan-
nel specificity can be easily observed as well, which also
verifies that the PGCU method indeed learns the difference
among channels.
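The two analysis tools mentioned in this paragraph can be written down directly; the sketch below computes the Jensen-Shannon divergence between two per-pixel discrete distributions (used as the clustering distance) and the entropy of one distribution (used for the channel-wise visualization), with random distributions standing in for the learned ones:

```python
# JS divergence and entropy for per-pixel discrete distributions.
import torch

def js_divergence(p, q, eps=1e-12):
    p, q = p.clamp(min=eps), q.clamp(min=eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: (a * (a / b).log()).sum(-1)   # KL(a || b)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def entropy(p, eps=1e-12):
    p = p.clamp(min=eps)
    return -(p * p.log()).sum(-1)

p = torch.softmax(torch.randn(32), dim=-1)   # distribution of one pixel
q = torch.softmax(torch.randn(32), dim=-1)   # distribution of another pixel
print(js_divergence(p, q).item(), entropy(p).item())
```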
To sum up, the contributions of this work are as follows:
• We propose a novel probability-based upsampling
model for pansharpening. This model assumes that each pixel of the upsampled
image obeys a probability distribution given the LRMS image and the PAN image.
• We design a new upsampling network module to im-
plement the probability-based upsampling model. The
module can fully exploit the global information of
LRMS and the cross-modal information of PAN. As
far as we know, PGCU is the first upsampling module
specifically designed for pansharpening.
• Extensive experiments verify that the PGCU module
can be embedded into the existing SOTA pansharpen-
ing networks to improve their performance in a plug-
and-play manner. Also, the PGCU method is a univer-
sal upsampling method and has potential application in
other guided image super-resolution tasks.
|
Zhao_Few-Shot_Class-Incremental_Learning_via_Class-Aware_Bilateral_Distillation_CVPR_2023 | Abstract
Few-Shot Class-Incremental Learning (FSCIL) aims to continually learn novel
classes based on only few training samples, which poses a more challenging task
than the well-studied Class-Incremental Learning (CIL) due to data scarcity.
While knowledge distillation, a prevailing technique in CIL, can alleviate the
catastrophic forgetting of older classes by regularizing outputs between the
current and previous model, it fails to consider the overfitting risk of novel
classes in FSCIL. To adapt the powerful distillation technique for FSCIL, we
propose a novel distillation structure that takes the unique challenge of
overfitting into account. Concretely, we draw knowledge from two complementary
teachers. One is the model trained on abundant data from base classes that
carries rich general knowledge, which can be leveraged for easing the
overfitting of current novel classes. The other is the updated model from the
last incremental session that contains the adapted knowledge of previous novel
classes, which is used for alleviating their forgetting. To combine the
guidances, an adaptive strategy conditioned on the class-wise semantic
similarities is introduced. Besides, for better preserving base class knowledge
when accommodating novel concepts, we adopt a two-branch network with an
attention-based aggregation module to dynamically merge predictions from two
complementary branches. Extensive experiments on 3 popular FSCIL datasets,
mini-ImageNet, CIFAR100 and CUB200, validate the effectiveness of our method by
surpassing existing works by a significant margin. Code is available at
https://github.com/LinglanZhao/BiDistFSCIL.
| 1. Introduction
Real-world applications often face novel data in continuous stream format. In
contrast, traditional models can only make predictions on a pre-defined label
set, and are not flexible enough to tackle novel classes which may emerge after
deployment. To address this issue, Class-Incremental Learning (CIL) has become
an active area of recent research [2, 13, 20, 26]. The main focus of CIL is to
effectively learn new concepts from abundant labeled samples
∗Equal contribution. †Corresponding author.
[Figure 1: (a) CIL (catastrophic forgetting): the logits of model t are
distilled only from model t-1, both trained on abundant base and novel data;
(b) FSCIL (catastrophic forgetting & overfitting): with few-shot novel data,
the logits of model t are guided by both model t-1 and the base model, merged
according to semantic similarity.]
Figure 1. Comparisons of (a) vanilla knowledge distillation in CIL
and (b) our adapted class-aware bilateral distillation for FSCIL.
and to simultaneously alleviate catastrophic forgetting over
old classes. However, the requirement of sufficient training data from novel
classes still makes CIL impractical in many scenarios, especially when
annotated samples are hard to obtain due to privacy or the unaffordable
collecting cost. For instance, to train an incremental model for face
recognition, one or only few images are uploaded for recognizing the newly
occurred person. To this end, Few-Shot Class-Incremental Learning (FSCIL) is
proposed to learn novel concepts given only a few samples [30]. FSCIL defines a
challenging task where abundant training samples are available only in the base
session for initial model pre-training and the model should continually absorb
novel concepts from few data points in each incremental session.
A prevailing technique in CIL is to leverage knowledge distillation for
alleviating the forgetting problem. The general routine is to calibrate the
output logits between the current and previous model, as illustrated in
Fig. 1(a). The output of the current model t is restrained to be consistent
with the output of model t-1 in the last incremental session. Nevertheless,
such a paradigm is not suitable for FSCIL [30, 41, 42], since the scarcity of
novel class samples will cause model
t-1 severely overfitting to the classes which occurred in that session (i.e.,
session t-1), making the model lack generalization ability, which further leads
to biased incremental learning in the current session t.
Therefore, to adapt the powerful distillation technique for the challenging
FSCIL task, we are devoted to designing a new distillation structure that can
simultaneously handle the forgetting and overfitting challenges. To this end,
we propose the class-aware bilateral distillation module, which adaptively
draws knowledge from two complementary teachers. One of them is the base model
trained on abundant data from base classes. By distilling from the base model,
we transfer the rich general knowledge learned from base classes to the
few-shot novel classes, hence easing their overfitting. The other teacher is
the updated model in the last session t-1, which carries the adapted knowledge
of previously seen novel classes (from session 1 to t-1); we can prevent this
knowledge from being forgotten by distilling from model t-1. Moreover, a
class-aware coefficient is learned to dynamically merge the above two guidances
by considering class-aware semantic similarities between novel and base classes
as priors. Intuitively, the more similar the base classes and a novel category
are, the more knowledge from base classes can be leveraged for alleviating the
overfitting. As presented in Fig. 1(b), instead of solely utilizing last
session's model t-1 for guiding the novel class adaptation, we selectively
merge the output logits from both model t-1 and the base model as the guidance
for distillation.
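A minimal sketch of the class-aware bilateral distillation described above; the specific form of the merging coefficient (here a per-sample similarity score in [0, 1]) and the assumption that the two teachers' logits are aligned to the same label space are illustrative choices, not the paper's exact formulation:

```python
# Sketch: merge two teachers' logits with a class-aware coefficient and distill.
import torch
import torch.nn.functional as F

def bilateral_distill_loss(student_logits, base_logits, prev_logits, alpha, T=2.0):
    """alpha in [0, 1]: per-sample weight on the base-model teacher."""
    teacher_logits = alpha * base_logits + (1.0 - alpha) * prev_logits
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction='batchmean',
    ) * T * T

student = torch.randn(8, 100)   # model t (current session)
base = torch.randn(8, 100)      # frozen base-session model (assumed aligned label space)
prev = torch.randn(8, 100)      # frozen model from session t-1
sim = torch.rand(8, 1)          # semantic similarity of each sample's class to the base classes
loss = bilateral_distill_loss(student, base, prev, alpha=sim)
```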
For further preserving base class knowledge when adapting to novel classes, an
attention-based aggregation module is proposed to automatically combine
predictions from the base model and the current model t. Considering that the
lower layers of a convolutional neural network capture fundamental visual
patterns [35], we set these layers shared and integrate the above models into a
unified framework. For clarity, we also refer to the base and the current model
as the base and novel branch, respectively. The two branches can be viewed as
two individual experts for handling samples from different categories. For a
test sample from base classes, the aggregation module will pay more attention
to predictions from the base branch since it specializes in base classes
without forgetting. In contrast, the focus will be moved to the novel branch
when evaluated on novel class test samples, because the novel branch is well
adapted to those incremental classes. Our contributions are three-fold:
• To adapt the prevailing distillation technique for addressing the unique
overfitting challenges posed by FSCIL, we propose a class-aware bilateral
distillation method by adaptively drawing knowledge from two complementary
teachers, which proves to be effective both in reducing the overfitting risk
and preventing the aggravated catastrophic forgetting.
• We propose a two-branch network where the two branches are well associated by
the class-aware bilateral distillation and attention-based aggregation module.
The framework can simultaneously accommodate novel concepts and retain base
knowledge, without sophisticated meta-training, and can be conveniently applied
to arbitrary pre-trained models, making it more practical in real-world
applications.
• The superiority of our approach is validated on three public FSCIL datasets,
mini-ImageNet, CIFAR100, and CUB200, by achieving remarkable state-of-the-art
performance. For example, we surpass the second best result on mini-ImageNet by
over 3%.
Zheng_POTTER_Pooling_Attention_Transformer_for_Efficient_Human_Mesh_Recovery_CVPR_2023 | Abstract
Transformer architectures have achieved SOTA perfor-
mance on the human mesh recovery (HMR) from monocu-
lar images. However, the performance gain has come at the
cost of substantial memory and computational overhead. A
lightweight and efficient model to reconstruct accurate hu-
man mesh is needed for real-world applications. In this
paper, we propose a pure transformer architecture named
POoling aTtention TransformER (POTTER) for the HMR
task from single images. Observing that the conventional
attention module is memory and computationally expensive,
we propose an efficient pooling attention module, which sig-
nificantly reduces the memory and computational cost with-
out sacrificing performance. Furthermore, we design a new
transformer architecture by integrating a High-Resolution
(HR) stream for the HMR task. The high-resolution local
and global features from the HR stream can be utilized for
recovering more accurate human mesh. Our POTTER out-
performs the SOTA method METRO by only requiring 7%
of total parameters and 14% of the Multiply-Accumulate
Operations on the Human3.6M (PA-MPJPE metric) and
3DPW (all three metrics) datasets. The project webpage
ishttps://zczcwh.github.io/potter_page/ .
| 1. Introduction
With the blooming of deep learning techniques in the
computer vision community, rapid progress has been made
in understanding humans from monocular images such as
human pose estimation (HPE). No longer satisfied with de-
tecting 2D or 3D human joints from monocular images, hu-
man mesh recovery (HMR) which can estimate 3D human
pose and shape of the entire human body has drawn increas-
ing attention. Various real-world applications such as gam-
*Work conducted during an internship at OPPO Seattle Research Cen-
ter, USA.
Figure 1. HMR performance comparison with Params and MACs
on 3DPW dataset. We outperform SOTA methods METRO [17]
and FastMETRO [3] with much fewer Params and MACs. PA-
MPJPE is the Procrustes Alignment Mean Per Joint Position Error.
ing, human-computer interaction, and virtual reality (VR)
can be facilitated by HMR with rich human body informa-
tion. However, HMR from single images is extremely chal-
lenging due to complex human body articulation, occlusion,
and depth ambiguity.
Recently, motivated by the evolution of the transformer
architecture in natural language processing, Vision Trans-
former (ViT) [4] successfully introduced transformer archi-
tecture to the field of computer vision. The attention mecha-
nism in transformer architecture demonstrates a strong abil-
ity to model global dependencies in comparison to the Con-
volutional Neural Network (CNN) architecture. With this
trend, the transformer-based models have sparked a vari-
ety of computer vision tasks, including object detection
[23, 26], semantic segmentation [2, 38], and video under-
standing [22, 31] with promising results. For HMR, the
SOTA methods [3, 17] all utilize the transformer architec-
ture to exploit non-local relations among different human
body parts for achieving impressive performance.
However, one significant limitation of these SOTA HMR
methods is model efficiency. Although transformer-based
methods [17, 18] lead to great improvement in terms of ac-
curacy, the performance gain comes at the cost of a substan-
tial computational and memory overhead. The large CNN
backbones are needed for [17, 18] to extract features first.
Then, computational and memory expensive transformer
architectures are applied to process the extracted features
for the mesh reconstruction. Mainly pursuing higher accu-
racy is not an optimal solution for deploying HMR models
in real-world applications such as human-computer interac-
tion, animated avatars, and VR gaming (for instance, SOTA
method METRO [17] requires 229M Params and 56.6G
MACs as shown in Fig. 1). Therefore, it is important to also
consider the memory footprint and computational complex-
ity when evaluating an HMR model.
[Figure 2 panels: (a) Attention-based block (e.g., ViT, Swin); (b) MLP-based
block (e.g., MLP-Mixer); (c) PAT (ours) with the PoolAttn module; (d) Params
and MACs comparison of each block, with N patches and embedding dimension D:
Attention: Params 4D^2 + 4D, MACs 4DN^2 + 2D^2N; Spatial MLP: Params 4N^2 + 4N,
MACs 4D^2N; PoolAttn: Params 30D, MACs 27DN. With N = 196 and D = 512:
Attention 1.05M Params / 180M MACs, Spatial MLP 0.15M Params / 200M MACs,
PoolAttn 0.02M Params / 4M MACs.]
Figure 2. Transformer blocks of different models. We suppose the
number of patches (N) and the embedding dimension (D) for each
block are the same when comparing the Params and MACs.
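The complexity numbers in Fig. 2 (d) follow directly from the listed formulas; a quick check with N = 196 and D = 512 (the printed values match the figure up to rounding):

```python
# Reproducing the Params/MACs comparison of Fig. 2(d) from its formulas
# (N: number of patches, D: embedding dimension).
N, D = 196, 512

params = {
    'Attention':   4 * D ** 2 + 4 * D,
    'Spatial MLP': 4 * N ** 2 + 4 * N,
    'PoolAttn':    30 * D,
}
macs = {
    'Attention':   4 * D * N ** 2 + 2 * D ** 2 * N,
    'Spatial MLP': 4 * D ** 2 * N,
    'PoolAttn':    27 * D * N,
}
for k in params:
    print(f'{k:12s} Params {params[k] / 1e6:.2f}M  MACs {macs[k] / 1e6:.0f}M')
# Attention    Params 1.05M  MACs 181M
# Spatial MLP  Params 0.15M  MACs 206M
# PoolAttn     Params 0.02M  MACs 3M
```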
To bridge this gap, we aim to design a lightweight end-
to-end transformer-based network for efficient HMR. Ob-
serving that the transformer blocks (attention-based ap-
proaches in Fig. 2 (a) and MLP-based approaches in Fig.
2 (b)) are usually computational and memory consuming,
we propose a Pooling Attention Transformer (PAT) block as
shown in Fig. 2 (c) to achieve model efficiency. After patch embedding, the
image input becomes X = [D, H/p, W/p], where D is the embedding dimension and
the number of patches is N = (H/p) × (W/p) when the patch size is p × p. The
input for a transformer block is often written as X_in = [N, D].
To reduce the memory and computational costs, we design
a Pooling Attention (PoolAttn) module in our PAT block.
The PoolAttn consists of patch-wise pooling attention and
embed-wise pooling attention. For the patch-wise pooling
attention block, we preserve the patches' spatial structure based on the input
X_in = [D, H/p, W/p], then apply patch-wise pooling attention to capture the
correlation of all the patches. For the embed-wise pooling attention block, we
maintain the 2D spatial structure of each patch (without flattening to 1D
embedded features). The input is reshaped to X_in = [N, D_h, D_w], where
D_h × D_w = D is the embed-
ding dimension. The embed-wise pooling attention is ap-
plied to model the dependencies of the embedding dimen-
sions in each patch. A detailed explanation is provided in
Section 3.2. The Params and MACs comparison between
the PoolAttn and conventional attention module or MLP-
based module is shown in Fig. 2 (d). Thus, PAT can reduce
the Params and MACs significantly while maintaining high
performance, which can be utilized for efficient HMR.
[Figure 3 panels: (a) ViT style framework: the number of patches is fixed
during each block, yielding only low-resolution global features; (b) Swin style
framework: the number of patches goes from large to small through patch
merging, and an extra Feature Pyramid Network (FPN) is needed; (c) POTTER
(ours): a basic stream with patch merging plus an HR stream with patch split,
maintaining high resolution while capturing both local and global correlations
for HMR.]
Figure 3. The illustration in terms of patches during each stage in
transformer architectures.
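The sketch below illustrates only the two tensor layouts described in the PoolAttn paragraph above, the patch-wise view on [D, H/p, W/p] and the embed-wise view on [N, D_h, D_w]; plain 2D average pooling is used as a placeholder for the actual PoolAttn operator, which is defined in the paper's Section 3.2 and not reproduced here:

```python
# Illustration of the two PoolAttn views. The avg_pool2d calls are placeholders
# for the real pooling-attention operator; only the tensor layouts follow the text.
import torch
import torch.nn.functional as F

B, D, Hp, Wp = 2, 512, 14, 14          # D: embed dim, Hp*Wp = N patches
Dh, Dw = 32, 16                        # Dh * Dw = D
x = torch.randn(B, D, Hp, Wp)

# Patch-wise view: keep the patches' 2D spatial structure [D, H/p, W/p] and
# mix information across patches.
patch_mix = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1)

# Embed-wise view: keep each patch's embedding as a 2D map [N, Dh, Dw] and
# mix information across embedding dimensions within every patch.
xe = x.flatten(2).transpose(1, 2).reshape(B, Hp * Wp, Dh, Dw)
embed_mix = F.avg_pool2d(xe, kernel_size=3, stride=1, padding=1)
embed_mix = embed_mix.reshape(B, Hp * Wp, D).transpose(1, 2).reshape(B, D, Hp, Wp)

out = patch_mix + embed_mix            # combined, same shape as the input
print(out.shape)  # torch.Size([2, 512, 14, 14])
```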
Equipped with PAT as our transformer block, the next
step for building an efficient and powerful transformer-
based HMR model is to design an overall architecture. The
naive approach is to apply a Vision Transformer [4] (ViT)
architecture as shown in Fig. 3 (a). The image is first
split into patches. After patch embedding, a sequence of
patches is treated as tokens for transformer blocks. But in
ViT, patches are always within a fixed scale in transformer
blocks, producing low-resolution features. For the HMR
task, high-resolution features are needed because human
body parts can vary substantially in scale. Moreover, ViT
architecture focuses on capturing the global correlation, but
the local relations can not be well modeled. Recently, Swin
[21] introduced a hierarchical transformer-based architec-
ture as shown in Fig. 3 (b). It has the flexibility to model
the patches at various scales, the global correlation can be
enhanced during hierarchical blocks. However, it also pro-
duces low-resolution features after the final stage. To obtain
high-resolution features, additional CNN networks such as
Feature Pyramid Network [19] (FPN) are required to aggre-
gate hierarchical feature maps for HMR. Thus, we propose
our end-to-end architecture as shown in Fig. 3 (c), the hier-
archical patch representation ensures the self-attention can
be modeled globally through transformer blocks with patch
merge. To overcome the issue that high-resolution represen-
tation becomes low-resolution after patch merge, we pro-
pose a High-Resolution (HR) stream that can maintain high-
resolution representation through patch split by leveraging
the local and global features from the basic stream. Finally,
the high-resolution local and global features are used for re-
constructing accurate human mesh. The entire framework
is also lightweight and efficient by applying our PAT block
as the transformer block.
Our contributions are summarized as follows:
• We propose a Pooling Transformer Block (PAT) which
is composed of the Pooling Attention (PoolAttn) module
to reduce the memory and computational burden without
sacrificing performance.
• We design a new transformer architecture for HMR by
integrating a High-Resolution (HR) stream. Considering
the patch’s merging and split properties in transformer,
the HR stream returns high-resolution local and global
features for reconstructing accurate human mesh.
• Extensive experiments demonstrate the effectiveness and
efficiency of our method – POTTER. In the HMR task,
POTTER surpasses the transformer-based SOTA method
METRO [17] on Human3.6M (PA-MPJPE metric) and
3DPW (all three metrics) datasets with only 7 % of
Params and 14 % MACs.
|
Zhao_Open_Set_Action_Recognition_via_Multi-Label_Evidential_Learning_CVPR_2023 | Abstract
Existing methods for open set action recognition focus
on novelty detection that assumes video clips show a single
action, which is unrealistic in the real world. We propose a
new method for open set action recognition and novelty de-
tection via MUlti-Label Evidential learning (MULE), which
goes beyond previous novel action detection methods by
addressing the more general problems of single or multi-
ple actors in the same scene, with simultaneous action(s)
by any actor. Our Beta Evidential Neural Network esti-
mates multi-action uncertainty with Beta densities based
on actor-context-object relation representations. An evi-
dence debiasing constraint is added to the objective func-
tion for optimization to reduce the static bias of video rep-
resentations, which can incorrectly correlate predictions
and static cues. We develop a primal-dual average scheme
update-based learning algorithm to optimize the proposed
problem and provide corresponding theoretical analysis.
Besides, uncertainty and belief-based novelty estimation
mechanisms are formulated to detect novel actions. Exten-
sive experiments on two real-world video datasets show that
our proposed approach achieves promising performance in
single/multi-actor, single/multi-action settings. Our code
and models are released at https://github.com/
charliezhaoyinpeng/mule .
| 1. Introduction
Open set human action recognition has been studied in
recent years due to its great potential in real-world appli-
cations, such as security surveillance [1], autonomous driv-
ing [34], and face recognition [26]. It differs from closed
set problems that aim to classify human actions into a prede-
fined set of known classes, since open set methods can iden-
tify samples with unseen classes with high accuracy [14].
To this end, several recent methods [4, 6, 10] are pro-
posed for open set human action recognition. As shown
in the bottom-left of Figure 1, they focus on single-actor,
single-action based recognition, assuming that each video
contains only one single action. Compared with softmax
[Figure 1: a grid of video examples arranged by the number of actors and the
number of actions, each showing train/test action labels such as 'listen to,
sit', 'watch (a person)', 'close, enter', and 'ride, talk to'.]
Figure 1. Novelty detection examples of single/multiple actor(s)
with single/multiple action(s) in video [16, 38], where an actor is
identified as novel (yellow) rather than being from a known cat-
egory (cyan) in inference. Existing works [4, 6] on open set ac-
tion recognition focus on single actor associated with single action
(bottom-left), while our method can handle different situations.
thresholding [13, 22, 25] for closed set recognition, eviden-
tial neural networks (ENNs) [4, 36] can provide a princi-
pled way to jointly formulate the multi-class classification
and uncertainty modeling to measure novelty of an instance
more accurately. It assumes that class probability follows a
prior Dirichlet distribution. However, in more realistic situ-
ation with multiple actions of actor(s) (see the upper part of
Figure 1), the Dirichlet distribution does not hold because
the predicted likelihood of each action follows a binomial
distribution ( i.e., identifying either known or novel action).
In this paper, we introduce a general but understudied
problem, namely novelty detection of actor(s) with multi-
ple actions . Given real-world use cases [14, 39], the goal
is to accurately detect if actor(s) perform novel/unknown
action(s) or not. Following [43], an actor is considered un-
known if it does not contain any known action(s). Inspired
by the belief theory [17, 45], we propose a new framework
named MUlti-Label Evidential learning (MULE), which is
composed of three modules: Actor-Context-Object Rela-
tion modeling (ACO-R), Beta Evidential Neural Network
(Beta-ENN), and Multi-label Evidence Debiasing Con-
straint (M-EDC). First, we build ACO-R representation to
exploit the actors’ interactions with the surrounding objects
and the context. Then, we use Beta-ENN to estimate the
evidence of known actions, and quantify the predictive un-
certainty of actions so that unknown actions would incur
high uncertainty, i.e., lack of confidence for known predic-
tions. Here, the evidence indicates actions closest to the
predicted one in the feature space and are used to support
the decision-making [36]. Instead of relying on Dirichlet
distribution [4], the evidence in Beta-ENN is regarded as
parameters of a Beta distribution which is a conjugate prior
of the Binomial likelihood.
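Under the common subjective-logic convention (an assumption on our part; the paper's exact parameterization may differ), per-action positive/negative evidence maps to Beta parameters and a belief/uncertainty split as follows:

```python
# Hedged sketch: turning per-action Beta evidence into belief and uncertainty,
# following the usual subjective-logic convention (not necessarily the paper's
# exact formulation).
def beta_opinion(e_pos, e_neg, W=2.0):
    """e_pos, e_neg >= 0: evidence for / against an action being present."""
    alpha, beta = e_pos + 1.0, e_neg + 1.0
    S = alpha + beta                       # Beta strength
    belief = e_pos / S                     # support for the action
    disbelief = e_neg / S                  # support against the action
    uncertainty = W / S                    # vacuity: high when evidence is scarce
    expected_prob = alpha / S              # mean of Beta(alpha, beta)
    return belief, disbelief, uncertainty, expected_prob

# Scarce evidence -> high uncertainty, useful for flagging unknown actions.
print(beta_opinion(0.5, 0.5))
print(beta_opinion(50.0, 2.0))
```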
Additionally, in open set recognition, static bias [21]
may bring a false correlation between the prediction and
static cues, such as scenes, resulting in inferior generaliza-
tion capability of a model. Therefore, the M-EDC is added
to the objective function of our framework to reduce the
static bias for video actions. We propose a duality-based
learning algorithm to optimize the network. Specifically,
we apply an averaging scheme to proximate primal opti-
mal solutions. The primal and dual parameters are updated
interactively, where the primal parameters regard model ac-
curacy and dual parameters adjust model debiasing. The
theoretical analysis shows the convergence of the primal so-
lution sequence and gives bounds for both the loss function
and the violation of the debiasing constraint in MULE. Ac-
cording to the proposed uncertainty and belief based novelty
estimation mechanisms, our model outperforms the state-
of-the-art on two action recognition datasets ( i.e., A V A [16]
and Charades [38]) in terms of novelty detection. The main
contributions of this work are summarized:
• A new framework MULE is proposed for open set action
recognition in videos that contains either a single or mul-
tiple actors associated with one or more actions. To the
best of our knowledge, this is the first study to detect ac-
tors with multiple unknown actions.
• To optimize the Beta-ENN, we develop a primal-dual av-
erage scheme update algorithm, with theoretical guaran-
tees on the convergence of the primal solution sequence
and bounds for both the loss function and the violation of
the debiasing constraint.
• We introduce four novelty estimation mechanisms to cal-
culate novelty score and achieve better performance on
novel action detection compared with existing methods.
|
Zhou_Revisiting_Prototypical_Network_for_Cross_Domain_Few-Shot_Learning_CVPR_2023 | Abstract
Prototypical Network is a popular few-shot solver that aims at establishing a
feature metric generalizable to novel few-shot classification (FSC) tasks using
deep neural networks. However, its performance drops dramatically when
generalizing to the FSC tasks in new domains. In this study, we revisit this
problem and argue that the devil lies in the simplicity bias pitfall in neural
networks. Specifically, the network tends to focus on some biased shortcut
features (e.g., color, shape, etc.) that are exclusively sufficient to
distinguish very few classes in the meta-training tasks within a pre-defined
domain, but fail to generalize across domains as some desirable semantic
features do. To mitigate this problem, we propose a Local-global Distillation
Prototypical Network (LDP-net). Different from the standard Prototypical
Network, we establish a two-branch network to classify the query image and its
random local crops, respectively. Then, knowledge distillation is conducted
between these two branches to enforce their class affiliation consistency. The
rationale behind this is that since such a global-local semantic relationship
is expected to hold regardless of data domains, the local-global distillation
is beneficial for exploiting cross-domain transferable semantic features for
feature metric establishment. Moreover, such local-global semantic consistency
is further enforced among different images of the same class to reduce the
intra-class semantic variation of the resultant feature. In addition, we
propose to update the local branch as an Exponential Moving Average (EMA) over
training episodes, which makes it possible to better distill cross-episode
knowledge and further enhance the generalization performance. Experiments on
eight cross-domain FSC benchmarks empirically support our argument and show
the state-of-the-art results of LDP-net. Code is available at
https://github.com/NWPUZhoufei/LDP-Net
*F. Zhou and P. Wang contributed equally in this work.
†Corresponding author.
| 1. Introduction
Prototypical Network (ProtoNet) [1] is a popular few-shot classification (FSC)
method, which works by establishing a feature metric generalizable to novel
few-shot tasks using deep neural networks. It adopts an episode-based learning
strategy, where each episode, e.g., N-way K-shot, is formulated as a
contrastive learning task to identify the correct class for each query sample
from a set of limited classes represented by prototypes derived from few
support samples. Thanks to the simplicity of the framework and appealing
few-shot learning performance, ProtoNet has gained great research attention
[2–5].
However, the performance of typical ProtoNet declines greatly when generalizing
to FSC tasks in new domains, e.g., applying the ProtoNet trained on natural
images in mini-ImageNet [6] to the fine-grained bird images in CUB [7]. This
severely restricts the practicality of ProtoNet in real applications. In this
work, we propose to re-inspect the intrinsic reason for the limited
cross-domain generalization capability of ProtoNet and revive it in the
cross-domain setting with the right medicine. Specifically, the key to
cross-domain generalization, especially in the few-shot setting with ProtoNet,
lies in exploiting some semantic information of each class that is invariant
across different domains. To this end, typical ProtoNet resorts to taking
advantage of the great expressive capacity of deep neural networks for feature
learning. Obviously, it fails to exploit the desirable cross-domain
transferable semantic features. In that case, what feature representations are
obtained by the deep neural network? Some recent works [8–10] may have found
the possible answer, viz., simplicity bias. It has been shown that neural
networks exclusively latch on to the simplest features (e.g., color, shape,
etc.) and tend to ignore the complex predictive features (e.g., semantics of
the object). Inspired by this, we argue that the limited cross-domain
generalization capacity of ProtoNet is incurred by the simplicity bias, viz.,
it tends to exploit some biased shortcut features that are exclusively
sufficient to distinguish very few classes in the meta-training tasks within a
pre-defined domain, but are prone to vary across different domains.
To mitigate this problem, we propose a Local-global Distillation Prototypical
Network (LDP-net) to identify image features and a metric that can generalize
better to FSC tasks in new domains. The network employs a two-branch structure.
A global branch predicts the class affiliation for each query image, which is
akin to standard ProtoNet. A local branch works with image patches randomly
cropped from the query image and makes classification predictions for such
local crops. We then perform knowledge distillation between these two branches
to enforce a global image and its local patches to have consistent class
affiliation predictions. The rationale behind this is twofold. Firstly,
compared to biased visual patterns, the semantic relationship between a global
image and its local patches can hold more generally regardless of data domains.
Secondly, the local-global distillation enables embedding richer semantic
information from local features into the final global feature representation,
which is proven to be more domain-invariant [11].
Taking a step further, we apply such an affiliation consistency constraint
across images belonging to the same class. By doing this, we can reduce the
intra-class semantic variation and further improve the robustness of the image
feature representations. In addition, the local branch is updated as an
Exponential Moving Average (EMA) of the global branch to produce robust
classification predictions, which enables our model to distill cross-episode
knowledge and enhance the generalization performance. Once the model is
trained, only the global branch is retained as a feature extractor for
cross-domain FSC evaluation. Notably, by simply freezing the feature extractor
in a new domain, the proposed method achieves state-of-the-art results on eight
cross-domain FSC benchmark datasets.
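A minimal sketch of the local-global consistency described above; the stand-in feature extractors, temperature, EMA momentum, and the choice of which branch serves as the teacher are illustrative simplifications rather than the paper's exact training scheme:

```python
# Sketch: prototype-based predictions for a query image and its random crop,
# a distillation-style consistency loss, and an EMA update of the local branch.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

global_enc = nn.Linear(3 * 32 * 32, 64)   # stand-in feature extractor (global branch)
local_enc = copy.deepcopy(global_enc)     # local branch, kept as an EMA copy

def proto_probs(feats, prototypes, tau=10.0):
    d = torch.cdist(feats, prototypes) ** 2        # ProtoNet metric: squared distance
    return F.softmax(-d / tau, dim=-1)

prototypes = torch.randn(5, 64)                    # 5-way episode prototypes (support set)
queries = torch.randn(30, 3 * 32 * 32)             # flattened query images
crops = torch.randn(30, 3 * 32 * 32)               # flattened random local crops of the queries

p_global = proto_probs(global_enc(queries), prototypes)
with torch.no_grad():
    p_local = proto_probs(local_enc(crops), prototypes)

# Local-global class-affiliation consistency via knowledge distillation.
consistency = F.kl_div(p_global.log(), p_local, reduction='batchmean')

# After each episode, the local branch tracks the global branch as an EMA.
m = 0.999
for p_t, p_s in zip(local_enc.parameters(), global_enc.parameters()):
    p_t.data.mul_(m).add_(p_s.data, alpha=1.0 - m)
```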
The major contributions of this study can be summarized
as follows:
• We inspect the limited cross-domain generalization capability of typical
ProtoNet from the perspective of simplicity bias and propose a local-global
knowledge distillation framework to effectively mitigate this problem.
• The proposed LDP-Net has insightful and innovative designs and can learn a
robust feature metric that generalizes better to FSC tasks in new domains.
• The proposed LDP-Net achieves state-of-the-art per-
formance on a set of cross-domain FSC benchmarks.
|
Zheng_Open-Category_Human-Object_Interaction_Pre-Training_via_Language_Modeling_Framework_CVPR_2023 | Abstract
Human-object interaction (HOI) has long been plagued
by the conflict between limited supervised data and a vast
number of possible interaction combinations in real life.
Current methods trained from closed-set data predict HOIs
as fixed-dimension logits, which restricts their scalability
to open-set categories. To address this issue, we introduce
OpenCat, a language modeling framework that reformu-
lates HOI prediction as sequence generation. By convert-
ing HOI triplets into a token sequence through a serial-
ization scheme, our model is able to exploit the open-set
vocabulary of the language modeling framework to pre-
dict novel interaction classes with a high degree of free-
dom. In addition, inspired by the great success of vision-
language pre-training, we collect a large amount of weakly-
supervised data related to HOI from image-caption pairs,
and devise several auxiliary proxy tasks, including soft re-
lational matching and human-object relation prediction, to
pre-train our model. Extensive experiments show that our
OpenCat significantly boosts HOI performance, particu-
larly on a broad range of rare and unseen categories.
| 1. Introduction
Human-object interaction (HOI) task [ 5,6], whose out-
put is usually in the format of a triplet: <human, relation,
object>, has drawn increasing attention due to its crucial
role in scene understanding. As humans, we have a rich vo-
cabulary to describe one human-object relation in various
ways (e.g., near, next to, close to). We can also recognize
different combinations of HOI triplets in our real-life sce-
narios. However, current HOI methods have struggled to
achieve such "open-category" capability for a long time.
We argue that this is primarily due to two deficiencies: in-
flexible prediction manner and insufficient supervised data.
Previous works treat HOI learning as a classification
problem where the class vocabulary must be pre-defined.
*Qin Jin is the corresponding author.
Figure 1. OpenCat reformulates HOI learning as a sequence gener-
ation task, rather than a closed-set classification task. Through the
aid of task-specific pre-training with weak supervision, our model
achieves open-category prediction on a large number of tail and
unseen HOI classes.
This approach involves projecting the input image into
fixed-dimension logits through a classifier, which restricts
the ability to identify new HOI triplets. In contrast, lan-
guage models [ 51] are more suited to predict free-form
texts, thanks to their extensive token vocabulary. Recently,
other works [ 9,62] explore to generate visual outputs using
a single language modeling objective. Inspired by this line
of research, we reformulate HOI learning as a language se-
quence generation problem as illustrated in Figure 1, which
enables our model to leverage an open-set vocabulary, gen-
erating HOI triplets with a high degree of freedom.
Moreover, HOI learning requires abundant labels for ex-
haustive HOI categories. However, due to the high cost of
labeling grounded HOIs and the natural long-tailed distribu-
tion of HOI categories, it is unrealistic to ensure sufficient
instances in each category. In fact, the two most popular
benchmarks so far, HICO-DET [ 5] and V-COCO [ 21], con-
tain 117 and 50 relation classes respectively, covering just
a small portion of the HOI categories in reality. Models
trained on such closed-set data fail to handle the large num-
ber of possible combinations of human, relation and object.
Recently, researchers have explored weakly supervised or
even self-supervised vision-language (VL) pre-training to
address data scarcity. These endeavors have achieved great
success, demonstrating their generalization to novel visual
or textual concepts [ 3,12,44]. Inspired by these works,
one intuitive idea is to leverage pre-training to overcome the
problem of insufficient labeled HOI data. However, lever-
aging weakly-supervised or unsupervised data for HOI pre-
training is not trivial. An HOI model must accurately local-
ize the interaction regions in the image and recognize fine-
grained differences among massive human activities (e.g.,
stand on motorcycle vs. sit on motorcycle), which is quite
challenging to learn from merely weak supervision (e.g.,
image-caption pairs). Therefore, the pre-training frame-
work as well as the proxy tasks must be well designed.
In this work, to address the issues of inflexible prediction
manner and insufficient supervised data in human-object interaction tasks, we
propose a novel Open-Category pre-training framework named OpenCat. Our
framework
utilizes a serialization scheme to convert HOI triplets into a
sequence of discrete tokens and incorporates several auxil-
iary proxy tasks to enhance visual representation, including
masked language prediction (MLP), human-object relation
prediction (HRP) and human-object patch jigsaw (HPJ), all
formulated as sequence generation tasks. To enable learn-
ing interaction alignment between human and object with-
out the need for grounded HOI annotations, we further de-
vise an additional proxy task named soft relational matching
(SRM). The SRM task borrows knowledge from a VL pre-
training model [ 34,50] to create pseudo alignment labels be-
tween detected object regions and HOI triplets parsed from
the caption. With these proxy tasks, our model improves its
generalization to a wide range of novel HOIs.
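A hypothetical serialization of HOI triplets into a token sequence; the special tokens, ordering, and box quantization below are illustrative assumptions and not OpenCat's actual scheme:

```python
# Sketch only: one plausible way to turn <human, relation, object> triplets
# (with boxes) into discrete tokens for a language-modeling head.
def serialize_hoi(triplets):
    """triplets: list of (human_box, relation, object, object_box),
    boxes given as (x1, y1, x2, y2) already quantized to integer bins."""
    tokens = []
    for human_box, relation, obj, obj_box in triplets:
        tokens += ['<human>'] + [f'<bin_{v}>' for v in human_box]
        tokens += relation.split() + obj.split()
        tokens += ['<obj>'] + [f'<bin_{v}>' for v in obj_box]
        tokens += ['<sep>']
    return tokens

seq = serialize_hoi([((12, 40, 88, 200), 'ride', 'bicycle', (10, 120, 150, 220))])
print(seq)
# ['<human>', '<bin_12>', '<bin_40>', '<bin_88>', '<bin_200>', 'ride', 'bicycle',
#  '<obj>', '<bin_10>', '<bin_120>', '<bin_150>', '<bin_220>', '<sep>']
```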
Our contributions can be outlined as follows:
• We introduce OpenCat, a language modeling frame-
work to effectively model open-category HOIs.
• We collect a large amount of weakly-supervised HOI
pre-training data based sorely on textual supervision
and devise several proxy tasks to train our model.
• By adapting our model to downstream HOI tasks, we
achieve state-of-the-art performance with larger gains
observed under zero-shot and few-shot setups.
|
Zhao_Re2TAL_Rewiring_Pretrained_Video_Backbones_for_Reversible_Temporal_Action_Localization_CVPR_2023 | Abstract
Temporal action localization (TAL) requires long-form
reasoning to predict actions of various durations and com-
plex content. Given limited GPU memory, training TAL end
to end (i.e., from videos to predictions) on long videos is
a significant challenge. Most methods can only train on
pre-extracted features without optimizing them for the lo-
calization problem, consequently limiting localization per-
formance. In this work, to extend the potential in TAL net-
works, we propose a novel end-to-end method Re2TAL,
which re wires pretrained video backbones for re versible
TAL.Re2TAL builds a backbone with reversible modules,
where the input can be recovered from the output such
that the bulky intermediate activations can be cleared from
memory during training. Instead of designing one single
type of reversible module, we propose a network rewiring
mechanism, to transform any module with a residual con-
nection to a reversible module without changing any pa-
rameters. This provides two benefits: (1) a large vari-
ety of reversible networks are easily obtained from exist-
ing and even future model designs, and (2) the reversible
models require much less training effort as they reuse the
pre-trained parameters of their original non-reversible ver-
sions. Re2TAL, only using the RGB modality, reaches
37.01% average mAP on ActivityNet-v1.3, a new state-of-
the-art record, and mAP 64.9% at tIoU=0.5 on THUMOS-
14, outperforming all other RGB-only methods. Code is
available at https://github.com/coolbay/Re2TAL.
| 1. Introduction
Temporal Action Localization (TAL) [36,53,73] is a fun-
damental problem of practical importance in video under-
standing. It aims to bound semantic actions within start
and end timestamps. Localizing such video segments is
very useful for a variety of tasks such as video-language
grounding [23,56], moment retrieval [9,21], video caption-
ing [30, 50]. Since video actions have a large variety of
Figure 1. Illustration of TAL network activations in train-
ing. Top: non-reversible network stores activations of all layers
in memory. Bottom: reversible network only needs to store the
activations of inter-stage downsampling layers. Backbone activa-
tions dominate memory occupation, compared to Localizer.
temporal durations and content, to produce high-fidelity lo-
calization, TAL approaches need to learn from a long tem-
poral scope of the video, which contains a large number of
frames. To accommodate all these frames along with their
network activations in GPU memory is extremely challeng-
ing, given the current GPU memory size ( e.g. the commod-
ity GPU GTX1080Ti only has 11GB). Often, it is impossi-
ble to train one video sequence on a GPU without substan-
tially downgrading the video spatial/temporal resolutions.
To circumvent the GPU-memory bottleneck, most TAL
methods deal with videos in two isolated steps ( e.g. [4, 67,
69, 71–73]). First is a snippet-level feature extraction step,
which simply extracts snippet representations using a pre-
trained video network (backbone) in inference mode. The
backbone is usually a large neural network trained for an
auxiliary task on a large dataset of trimmed video clips
(e.g., action recognition on Kinetics-400 [28]). The second
step trains a localizer on the pre-extracted features. In this
way, only the activations of the TAL head need to be stored
in memory, which is tiny compared to those of the back-
bone (see the illustration of the activation contrast between
backbone and localizer in Fig. 1). However, this two-step
strategy comes at a steep price. The pre-extracted features
can suffer from domain shift from the auxiliary pre-training
task/data to TAL, and do not necessarily align with the rep-
resentation needs of TAL. This is because they cannot be
finetuned and must be used as-is in their misaligned state
for TAL. A better alternative is to jointly train the backbone
and localizer end to end. But as mentioned earlier, the enor-
mous memory footprint of video activations in the backbone
makes it extremely challenging. Is there a way for end-to-
end training without compromising data dimensionality?
Reversible networks [20, 25, 31, 48] provide an elegant
solution to drastically reduce the feature activation mem-
ory during training. Their input can be recovered from the
output via a reverse computation. Therefore, the interme-
diate activation maps, which are used for back propagation,
do not need to be cached during the forward pass (as il-
lustrated in Fig. 1). This offers a promising approach to
enable memory-efficient end-to-end TAL training, and var-
ious reversible architectures have been proposed, such as
RevNet [20], and RevViT [48]. However, these works de-
sign a specific reversible architecture and train for a partic-
ular dataset. Due to their new architecture, they also need
to train the networks from scratch, requiring a significant
amount of compute resources.
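For background, the standard RevNet-style coupling shows why inputs can be recomputed from outputs so that intermediate activations need not be cached; Re2TAL's rewiring of a pretrained residual module follows the same principle but with its own construction, which this sketch does not reproduce:

```python
# Classic reversible coupling: forward y1 = x1 + F(x2), y2 = x2 + G(y1);
# the inverse recovers (x1, x2) exactly, so activations can be recomputed
# during the backward pass instead of being stored.
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    def __init__(self, F, G):
        super().__init__()
        self.F, self.G = F, G           # two residual sub-functions

    def forward(self, x1, x2):
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return y1, y2

    def inverse(self, y1, y2):          # recompute the inputs from the outputs
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return x1, x2

blk = ReversibleBlock(nn.Linear(64, 64), nn.Linear(64, 64))
a, b = torch.randn(4, 64), torch.randn(4, 64)
with torch.no_grad():
    y1, y2 = blk(a, b)
    r1, r2 = blk.inverse(y1, y2)
print(torch.allclose(a, r1, atol=1e-5), torch.allclose(b, r2, atol=1e-5))  # expect: True True
```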
Conversely, it would be beneficial to be able to convert
existing non-reversible video backbones to reversible ones,
which would (1)avail a large variety of architectures and
(2)allow us to reuse the large compute resources that had
already been invested in training the non-reversible video
backbones. Since pre-trained video backbones are a crucial
part of TAL, the ability to convert off-the-shelf backbones
to reversible ones is a key to unleash their power in this task.
In this work, for end-to-end TAL, we propose a principled approach to rewire
the architectural connections of a pre-trained non-reversible backbone to make
it reversible, dubbed Re2TAL. Network modules with a residual connection
(res-module for short), such as a ResNet block [22] or a Transformer
MLP/attention layer [14], have recently become the most popular design. Given
any network composed of resid-
ual modules, we can apply our rewiring technique to
convert it to a corresponding reversible network without
introducing or removing any trainable parameters. Instead
of training from scratch, our reversible network can reuse
the non-reversible network’s parameters and only needs a
small number of epochs for finetuning to reach similar per-
formance. We summarize our contributions as follows.
(1) We propose a novel approach to construct and train
reversible video backbones parsimoniously by architec-
tural rewiring from an off-the-shelf pre-trained video back-
bone. This not only provides a large collection of reversible
candidates, but also allows reusing the large compute re-
sources invested in pre-training these models. We apply
our rewiring technique to various kinds of representative
video backbones, including transformer-based Video Swin
and ConvNet-based Slowfast, and demonstrate that our re-versible networks can reach the same performance of their
non-reversible counterparts with only minimum finetuning
effort (as low as 10 epochs compared to 300 epochs for
training from scratch).
(2) We propose a novel approach for end-to-end TAL
training using reversible video networks. Without sacrific-
ing spatial/temporal resolutions or network capability, our
proposed approach dramatically reduces GPU memory us-
age, thus enabling end-to-end training on one 11GB GPU.
We demonstrate on different localizers and different back-
bone architectures that we significantly boost TAL perfor-
mance with our end-to-end training compared to traditional
feature-based approaches.
(3) With our proposed Re2TAL, we use recent localiz-
ers in the literature to achieve a new state-of-the-art perfor-
mance, 37.01% average mAP on ActivityNet-v1.3. We also
reach the highest mAP among all methods that only use the
RGB modality on THUMOS-14, 64.9%at tIoU = 0.5, out-
performing concurrent work TALLFormer [10].
|
Zheng_HairStep_Transfer_Synthetic_to_Real_Using_Strand_and_Depth_Maps_CVPR_2023 | Abstract
In this work, we tackle the challenging problem of
learning-based single-view 3D hair modeling. Due to the
great difficulty of collecting paired real image and 3D hair
data, using synthetic data to provide prior knowledge for
real domain becomes a leading solution. This unfortunately
introduces the challenge of domain gap. Due to the inherent
difficulty of realistic hair rendering, existing methods typi-
cally use orientation maps instead of hair images as input
to bridge the gap. We firmly think an intermediate represen-
tation is essential, but we argue that orientation map using
the dominant filtering-based methods is sensitive to uncer-
tain noise and far from a competent representation. Thus,
we first raise this issue up and propose a novel intermedi-
ate representation, termed as HairStep , which consists of a
strand map and a depth map. It is found that HairStep not
only provides sufficient information for accurate 3D hair
modeling, but also is feasible to be inferred from real im-
ages. Specifically, we collect a dataset of 1,250 portrait im-
ages with two types of annotations. A learning framework
is further designed to transfer real images to the strand map
and depth map. It is noted that, an extra bonus of our new
dataset is the first quantitative metric for 3D hair modeling.
*Corresponding author: [email protected]
Extensive experiments show that HairStep narrows the domain
gap between synthetic and real and achieves state-of-the-
art performance on single-view 3D hair reconstruction.
| 1. Introduction
High-fidelity 3D hair modeling is a critical part in the
creation of digital human. A hairstyle of a person typically
consists of about 100,000 strands [1]. Due to the complex-
ity, high-quality 3D hair model is expensive to obtain. Al-
though high-end capture systems [9, 18] are relatively ma-
ture, it is still difficult to reconstruct satisfactory 3D hair
with complex geometries.
Chai et al. [3,4] first present simple hair modeling meth-
ods from single-view images, which enable the acquisition
of 3D hair more user-friendly. But these early systems re-
quire extra input such as user strokes. Moreover, they only
work for visible parts of the hair and fail to recover in-
visible geometries faithfully. Recently, retrieval-based ap-
proaches [2, 10] reduce the dependency of user input and
improve the quality of reconstructed 3D hair model. How-
ever, the accuracy and efficiency of these approaches are
directly influenced by the size and diversity of the 3D hair
database.
Inspired by the advances of learning-based shape recon-
struction, 3D strand models are generated by neural net-
works as explicit point sequences [45], volumetric orien-
tation field [25, 29, 40], and implicit orientation field [36]
from single-view input. With the above evolution of 3D hair
representations, the quality of recovered shape has been im-
proved significantly. As populating pairs of 3D hair and real
images is challenging [45], existing learning-based meth-
ods [25, 29, 36, 39, 45] are just trained on synthetic data be-
fore applying on real portraits. However, the domain gap
between rendered images (from synthetic hair models) and
real images has a great and negative impact on the quality
of reconstructed results. 3D hairstyles recovered by these
approaches often mismatch the given images in some im-
portant details (e.g., orientation, curliness, and occlusion).
To narrow the domain gap between the synthetic data
and real images, most existing methods [36, 37, 40, 45] take
2D orientation map [22] as an intermediate representation
between the input image and 3D hair model. However, this
undirected 2D orientation map is ambiguous in growing di-
rection and loses 3D hints given in the image. More impor-
tantly, it relies on image filters, which leads to noisy orien-
tation maps. In this work, we re-consider the current issues
in single-view 3D hair modeling and believe that it is neces-
sary to find a more appropriate intermediate representation
to bridge the domain gap between real and synthetic data.
This representation should provide enough information for
3D hair reconstruction. Also, it should be domain invariant
and can be easily obtained from real images.
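For reference, a common way such filtering-based (undirected) orientation maps are computed, the kind of estimate criticized above, is to take at every pixel the orientation of the strongest response over a bank of oriented Gabor filters; the parameters below are illustrative only:

```python
# Sketch of a filter-bank-based 2D orientation map: per pixel, keep the
# orientation whose Gabor filter responds most strongly. The result is
# undirected (angles in [0, pi)) and sensitive to noise, as discussed above.
import cv2
import numpy as np

def orientation_map(gray, n_orient=32):
    thetas = np.linspace(0, np.pi, n_orient, endpoint=False)
    responses = []
    for theta in thetas:
        # arguments: ksize, sigma, theta, lambda, gamma, psi
        kern = cv2.getGaborKernel((17, 17), 2.0, theta, 4.0, 0.5, 0)
        responses.append(np.abs(cv2.filter2D(gray, cv2.CV_32F, kern)))
    responses = np.stack(responses, axis=0)      # (n_orient, H, W)
    return thetas[responses.argmax(axis=0)]      # per-pixel angle in [0, pi)

gray = np.random.rand(128, 128).astype(np.float32)  # stands in for the hair region of a portrait
orient = orientation_map(gray)
```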
To address the above issues, we propose HairStep , a
strand-aware and depth-enhanced hybrid representation for
single-view 3D hair modeling. Motivated by how to gener-
ate clean orientation maps from real images, we annotate
strand maps (i.e., directed 2D orientation maps) for real
images via drawing well-aligned dense 2D vector curves
along the hair. With this help, we can predict directed and
clean 2D orientation maps from input single-view images
directly. We also need an extra component of the inter-
mediate representation to provide 3D information for hair
reconstruction. Inspired by depth-in-the-wild [5], we an-
notate relative depth information for the hair region of real
portraits. But depth learned from sparse and ordinal an-
notations has a non-negligible domain gap against the syn-
thetic depth. To solve this, we propose a weakly-supervised
domain adaptive solution based on the borrowed synthetic
domain knowledge. Once we obtain the strand map and
depth map, we combine them together to form HairStep .
Then this hybrid representation will be fed into a network
to learn 3D orientation field and 3D occupancy field of 3D
hair models in implicit way. Finally, the 3D strand models
can be synthesized from these two fields. The high-fidelity
results are shown in Fig. 1. We name our dataset of hair im-
ages with strand annotation as HiSa and the one with depth
annotation as HiDa for convenience.
Previous methods are mainly evaluated on real inputs
through the comparison of the visual quality of recon-
structed 3D hair and well-prepared user study. This subjec-
tive measurement may lead to unfair evaluation and biased
conclusion. NeuralHDHair [36] projects the growth direc-
tion of reconstructed 3D strands, and compares with the 2D
orientation map filtered from real image. This is a notewor-
thy progress, but the extracted orientation map is noisy and
inaccurate. Moreover, only 2D growing direction is evalu-
ated and 3D information is ignored. Based on our annota-
tions, we propose novel and objective metrics for the eval-
uation of single-view 3D hair modeling on realistic images.
We render the recovered 3D hair model to obtain strand and
depth map, then compare them with our ground-truth anno-
tations. Extensive experiments on our real dataset and the
synthetic 3D hair dataset USC-HairSalon [10] demonstrate
the superiority of our novel representation.
The main contributions of our work are as follows:
• We first re-think the issue of the significant domain gap
between synthetic and real data in single-view 3D hair
modeling, and propose a novel representation HairStep .
Based on it, we provide a fully-automatic system for
single-view hair strands reconstruction which achieves
state-of-the-art performance.
• We contribute two datasets, namely HiSa andHiDa , to
annotate strand maps and depth for 1,250 hairstyles of
real portrait images. This opens a door for future research
about hair understanding, reconstruction and editing.
• We carefully design a framework to generate HairStep
from real images. More importantly, we propose a
weakly-supervised domain adaptive solution for hair
depth estimation.
• Based on our annotations, we introduce novel and fair
metrics to evaluate the performance of single-view 3D
hair modeling methods on real images.
|
Zhou_Multi-Granularity_Archaeological_Dating_of_Chinese_Bronze_Dings_Based_on_a_CVPR_2023 | Abstract
The archaeological dating of bronze dings has played a
critical role in the study of ancient Chinese history. Current
archaeology depends on trained experts to carry out bronze
dating, which is time-consuming and labor-intensive. For
such dating, in this study, we propose a learning-based ap-
proach to integrate advanced deep learning techniques and
archaeological knowledge. To achieve this, we first collect
a large-scale image dataset of bronze dings, which contains
richer attribute information than other existing fine-grained
datasets. Second, we introduce a multihead classifier and a
knowledge-guided relation graph to mine the relationship
between attributes and the ding era. Third, we conduct
comparison experiments with various existing methods, the
results of which show that our dating method achieves a
state-of-the-art performance. We hope that our data and applied networks will enrich fine-grained classification re-
search relevant to other interdisciplinary areas of expertise.
The dataset and source code used are included in our sup-
plementary materials, and will be open after submission
owing to the anonymity policy. Source codes and data are
available at: https://github.com/zhourixin/bronze-Ding
| 1. Introduction
Dings are cauldrons used for cooking, storage, and rit-
ual offerings to gods or ancestors in ancient China, and
they are the most important species used in Chinese ritual
bronzes [35]. The archaeological dating of dings has con-
tributed to the study of ancient Chinese history. Although
the excavated bronzes are massive, dating such artifacts de-
pends on the long-term training and accumulation of ex-
pertise in archaeological typology [58]. In addition, some
*Corresponding authors
artifacts are easy to identify to a precise age and others are
difficult to identify.
For the object, we focus on a ding, the features of which
are similar and complicated in different eras, as shown in
the columns of Figure 1. We therefore consider this dat-
ing task as a fine-grained classification problem. Simulta-
neously, research into fine-grained classification is close to
that of other areas of expertise because it often requires ex-
pensive specialized data and domain knowledge, such as birds
(zoology) [51, 52] and flowers (botany) [43].
Data features and domain knowledge, particularly in ar-
chaeology, vary in different fields. In addition to the com-
mon traits of the existing fine-grained datasets, our data are
more challenging. First, our data are unbalanced and dif-
ficult to mitigate through their collection because they are
determined based on an unearthed state. Second, there are
more similarities between bronze dings of adjacent eras,
leading to the possibility of misclassifying them into fine
granularity adjacent eras beyond a coarse granularity. In
other words, compared to other fine-grained classification
data, our data have a larger intra-class difference and a
smaller inter-class difference between adjacent eras. Third,
the attributes and eras are intertwined and the relations are
more complex. Each period of bronze dings has multiple
shapes and characteristics, and each shape and characteris-
tic corresponds to multiple periods of bronze dings, leading
to the impracticality of making simple judgments regarding
the period based on the shape and characteristic. Existing
fine-grained classification methods therefore struggle when
applied to our data.
To address these issues, we make the following contribu-
tions in this study:
• We collect an image dataset of 3690 bronze dings with
rich annotations made by bronze experts, including the
era (4 coarse-grained dynasties and 11 fine-grained
periods), attributes (29 shapes and 96 characteristics
with bounding boxes), literature, location of excava-
tion, and the museum where they are displayed.
• We build an end-to-end multihead network to solve
this multi-granularity task. The two heads combine
coarse- and fine-grained features in a bidirectional
manner with a gradient truncated addition to improve
the performance at both granularities. The outputs of
other two heads, the shape and characteristic nodes, are
added to a knowledge-guided relation graph to embed
the domain knowledge into our network.
• We propose exploiting these rich attributes following
archaeological knowledge by employing the focal-type
probability classification loss and indicate the ineffec-
tiveness of simply concatenating external information.
• We achieve the best performance in terms of the dating
accuracy, outperforming other state-of-the-art (SOTA)
fine-grained classification methods. |
Zhao_PoseFormerV2_Exploring_Frequency_Domain_for_Efficient_and_Robust_3D_Human_CVPR_2023 | Abstract
Recently, transformer-based methods have gained sig-
nificant success in sequential 2D-to-3D lifting human pose
estimation. As a pioneering work, PoseFormer captures
spatial relations of human joints in each video frame and
human dynamics across frames with cascaded transformer
layers and has achieved impressive performance. However,
in real scenarios, the performance of PoseFormer and its
follow-ups is limited by two factors: (a) The length of the
input joint sequence; (b) The quality of 2D joint detection.
Existing methods typically apply self-attention to all frames
of the input sequence, causing a huge computational burden
when the frame number is increased to obtain advanced es-
timation accuracy, and they are not robust to noise natu-
rally brought by the limited capability of 2D joint detectors.
In this paper, we propose PoseFormerV2, which exploits a
compact representation of lengthy skeleton sequences in the
frequency domain to efficiently scale up the receptive field
and boost robustness to noisy 2D joint detection. With min-
imum modifications to PoseFormer, the proposed method
effectively fuses features both in the time domain and fre-
quency domain, enjoying a better speed-accuracy trade-off
than its precursor. Extensive experiments on two benchmark
datasets (i.e., Human3.6M and MPI-INF-3DHP) demon-
strate that the proposed approach significantly outperforms
the original PoseFormer and other transformer-based vari-
ants. Code is released at https://github.com/
QitaoZhao/PoseFormerV2 .
| 1. Introduction
3D human pose estimation (HPE) aims at localizing
human joints in 3-dimensional space based on monocular
videos (without intermediate 2D representations) [23,25] or
2D human joint sequences (referred to as 2D-to-3D lifting
*Work was done while Qitao was an intern mentored by Chen Chen.
Figure 1. Comparisons of PoseFormerV2 and PoseFormerV1 [41]
on Human3.6M [12]. RF denotes Receptive Field and k×RF indi-
cates that the ratio between the full sequence length and the num-
ber of frames as input into the spatial encoder of PoseFormerV2
is k, i.e., the RF of the spatial encoder is expanded by k× with a
few low-frequency coefficients of the full sequence. The proposed
method outperforms PoseFormerV1 by a large margin in terms of
speed-accuracy trade-off, and the larger k brings more significant
improvements, e.g., 4.6× speedup with the k of 27.
approaches) [5,17,33,39]. With the large availability of 2D
human pose detectors [6, 24] plus the lightweight nature of
2D skeleton representation of humans, lifting-based meth-
ods are now dominant in 3D human pose estimation. Com-
pared to raw monocular videos, 2D coordinates of human
joints in each video frame are much more memory-friendly,
making it possible for lifting-based methods to utilize a long
joint sequence to boost pose estimation accuracy.
Transformers [32] first gain huge success in the field of
natural language processing (NLP) [3, 7] and then extend
their capacity to the computer vision community, becom-
ing the de facto approach for several vision tasks, e.g., im-
age classification [8, 18, 31], object detection [4, 42] and
video recognition [1,2,38]. The discreteness of human joint
representation and the requirement for long-range temporal
Figure 2. Overview of PoseFormerV1. PoseFormerV1 mainly
consists of two modules: the spatial transformer encoder and the
temporal transformer encoder. The temporal encoder of Pose-
FormerV1 applies self-attention to all frames given a 2D joint se-
quence for human motion modeling.
Table 1. The computational cost and performance drop brought
by replacing ground-truth 2D detection with CPN [6] 2D pose
detection for the SOTA transformer-based methods. The perfor-
mance drop is reported on Human3.6M dataset (Protocol 1) [12].
RF: Receptive Field, sharing the same meaning as that in Fig. 1.
Method | Seq. Length | GFLOPs | Perform. Drop (mm)
PoseFormerV1 [41] ICCV’21 | 81 | 1.36 | 13.0
StridedTransformer [14] TMM’22 | 243 | 1.37 | 15.2
MixSTE [40] CVPR’22 | 81 | 92.46 | 16.5
MHFormer [15] CVPR’22 | 81 | 3.12 | 11.8
P-STMO [29] ECCV’22 | 243 | 1.74 | 13.5
PoseFormerV2 (9×RF) | 81 | 0.35 | 8.2
PoseFormerV2 (27×RF) | 81 | 0.12 | 9.7
dependency modeling in a skeleton sequence make trans-
formers an excellent fit for lifting-based human pose esti-
mation. Previous works [14, 15, 29, 40, 41] have adopted
transformers as the backbone for 3D human pose estima-
tion and shown promising results.
As the pioneering work among transformer-based meth-
ods, PoseFormer [41] factorizes joint sequence feature ex-
traction into two stages (see Fig. 2) and outperforms tradi-
tional convolution-based approaches. First, all joints within
each frame are linearly projected into high-dimensional
vectors ( i.e., joint tokens) as input into the spatial trans-
former encoder. The spatial encoder builds up inter-joint
dependencies in single frames with the self-attention mech-
anism. In the second stage, joint tokens of each frame are
combined as one frame token, serving as input to the tem-
poral encoder for human motion modeling across all frames
in sequence. More details are included in Sec. 3.1.
Despite its capacity, the performance of PoseFormer
(and other transformer-based methods) is limited by two
crucial factors. (a)The length (number of frames) of the
input 2D skeleton sequence. State-of-the-art transformer-
based methods typically use extremely long sequences to
obtain advanced performance, e.g., 81 frames for Pose-
Former [41], 243 frames for P-STMO [29] and 351 frames for MHFormer [15]. However, densely applying self-
attention to such long sequences is highly computation-
ally expensive, e.g., the single-epoch wall-time training
cost of 3-frame PoseFormer is ∼5 minutes while for 81-
frame PoseFormer the cost surges to ∼1.5 hour on an RTX
3090 GPU. (b)The quality of 2D joint detection. 2D joint
detectors inevitably introduce noise due to bias in their
training dataset and the temporal inconsistency brought by
the single-frame estimation paradigm. For example, Pose-
Former achieves 31.3mm MPJPE (Mean Per Joint Position
Error) using the ground-truth 2D detection on the Hu-
man3.6M dataset [12]. This result drops significantly to
44.3mm when the clean input is replaced by the CPN [6]
2D pose detection. In practice, the long-sequence inference
may be unaffordable for hardware deployment on resource-
limited devices such as AR/VR headsets and high-quality
2D detection is hard to obtain. More quantitative results
about the efficiency to process long sequences and the ro-
bustness to noisy 2D joint detection of existing transformer-
based methods are available in Table 1.
Driven by these practical concerns, we raise two impor-
tant research questions:
•Q1: How to efficiently utilize long joint sequences for bet-
ter estimation precision?
•Q2: How to improve the robustness of the model against
unreliable 2D pose detection?
Few works have tried to answer either of these two ques-
tions by incorporating hand-crafted modules, e.g., the
downsampling-and-uplifting module [9] that only processes
a proportion of video frames for improved efficiency, the
multi-hypothesis module [15] to model the depth ambiguity
of body parts and the uncertainty of 2D detectors. How-
ever, none of them manages to find a single solution to these
two questions simultaneously, and even worse, a paradox
seemingly exists between solutions to the questions above,
e.g., multiple hypotheses [15] improve robustness but bring
additional computation cost (see also Table 1).
In this paper, we present our initial attempt to “kill” two
birds with one stone . With restrained modifications to the
prior art PoseFormer, we show that the appropriate form
of representation for input sequences might be the key to
answering these questions simultaneously. Specifically, we
shed light on the barely explored frequency domain in 3D
HPE literature and propose to encode the input skeleton se-
quences into low-frequency coefficients. The insight be-
hind this representation is surprisingly simple: On the one
hand, low-frequency components are enough to represent
the entire visual identity [34, 37] ( e.g., 2D images in im-
age compression and joint trajectories in this case), thus re-
moving the need for expensive all-frame self-attention; On
the other, the low-frequency representation of the skeleton
sequence itself filters out high-frequency noise (jitters and
outliers) [19, 20] contained in detected joint trajectories.
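As a minimal sketch of this idea (assuming a type-II DCT as the frequency transform, which is one common choice; the exact transform and coefficient count are not specified in this excerpt):

```python
import numpy as np
from scipy.fft import dct, idct

def lowfreq_encode(joint_seq: np.ndarray, n_coeff: int) -> np.ndarray:
    """Keep only the first n_coeff DCT coefficients of each joint trajectory.

    joint_seq: (T, J, 2) 2D joint sequence over T frames.
    Returns:   (n_coeff, J, 2) low-frequency representation.
    """
    coeff = dct(joint_seq, axis=0, norm="ortho")  # per-trajectory DCT along time
    return coeff[:n_coeff]

def lowfreq_decode(coeff: np.ndarray, T: int) -> np.ndarray:
    """Reconstruct a smoothed T-frame sequence from the truncated coefficients."""
    full = np.zeros((T,) + coeff.shape[1:])
    full[: coeff.shape[0]] = coeff
    return idct(full, axis=0, norm="ortho")

# Example: an 81-frame, 17-joint sequence compressed to 9 frequency components.
seq = np.random.randn(81, 17, 2)
tokens = lowfreq_encode(seq, n_coeff=9)   # compact global context of the whole sequence
smoothed = lowfreq_decode(tokens, T=81)   # high-frequency jitter and outliers are filtered out
```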
We inherit the spatial-temporal architecture from Pose-
Former but force the spatial transformer encoder to only
“see” a few central frames in a long sequence. Then we
complement “short-sighted” frame-level features (the out-
put of the spatial encoder) with global features from low-
frequency components of the complete sequence. Without
resorting to the expensive frame-to-frame self-attention for
all time steps, the temporal transformer encoder is reformu-
lated as a Time-Frequency Feature Fusion module.
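A sketch of such a fusion module is given below, pairing a few time-domain frame tokens with frequency-domain tokens of the full sequence; the layer sizes, the DCT, and the pooling are illustrative assumptions rather than the exact PoseFormerV2 design.

```python
import torch
import torch.nn as nn
from scipy.fft import dct

class TimeFrequencyFusion(nn.Module):
    """Illustrative fusion of a few time-domain frame tokens with frequency-domain
    tokens computed from the complete 2D joint sequence."""
    def __init__(self, dim=256, n_joints=17, layers=2):
        super().__init__()
        self.freq_proj = nn.Linear(n_joints * 2, dim)  # lift DCT coefficients to tokens
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)

    def forward(self, frame_tokens, full_seq, n_coeff=9):
        # frame_tokens: (B, F, dim) from the spatial encoder over a few central frames
        # full_seq:     (B, T, J, 2) complete 2D joint sequence
        coeff = torch.from_numpy(
            dct(full_seq.cpu().numpy(), axis=1, norm="ortho")[:, :n_coeff]
        ).to(frame_tokens).flatten(2)              # (B, n_coeff, J*2)
        freq_tokens = self.freq_proj(coeff)        # (B, n_coeff, dim)
        fused = self.encoder(torch.cat([frame_tokens, freq_tokens], dim=1))
        return fused.mean(dim=1)                   # pooled feature for the center frame
```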
Extensive experiments on two 3D human pose es-
timation benchmarks ( i.e., Human3.6M [12] and MPI-
INF-3DHP [21]) demonstrate that the proposed approach,
dubbed as PoseFormerV2 , significantly outperforms its
precursor (see Fig. 1) and other transformer-based variants
in terms of speed-accuracy trade-off and robustness to noise
in 2D joint detection. Our contributions are three-fold:
• To the best of our knowledge, we are the first to utilize a
frequency-domain representation of input joint sequences
for 2D-to-3D lifting HPE. We find this representation an
ideal fit to concurrently solve two important issues in the
field ( i.e., the efficiency to process long sequences and the
robustness to unreliable joint detection), and experimen-
tal evidence shows that this approach can easily general-
ize to other models.
• We design an effective Time-Frequency Feature Fusion
module to narrow the gap between features in the time do-
main and frequency domain, enabling us to strike a flexi-
ble balance between speed and accuracy.
• Our PoseFormerV2 outperforms other transformer-based
methods in terms of the speed-accuracy trade-off and ro-
bustness on Human3.6M and achieves the state-of-the-art
on MPI-INF-3DHP.
|
Zhou_Joint_Visual_Grounding_and_Tracking_With_Natural_Language_Specification_CVPR_2023 | Abstract
Tracking by natural language specification aims to lo-
cate the referred target in a sequence based on the nat-
ural language description. Existing algorithms solve this
issue in two steps, visual grounding and tracking, and ac-
cordingly deploy the separated grounding model and track-
ing model to implement these two steps, respectively. Such
a separated framework overlooks the link between visual
grounding and tracking, which is that the natural language
descriptions provide global semantic cues for localizing the
target in both steps. Besides, the separated frame-
work can hardly be trained end-to-end. To handle these
issues, we propose a joint visual grounding and tracking
framework, which reformulates grounding and tracking as
a unified task: localizing the referred target based on the
given visual-language references. Specifically, we propose
a multi-source relation modeling module to effectively build
the relation between the visual-language references and the
test image. In addition, we design a temporal modeling
module to provide a temporal clue with the guidance of
the global semantic information for our model, which ef-
fectively improves the adaptability to the appearance vari-
ations of the target. Extensive experimental results on
TNL2K, LaSOT, OTB99, and RefCOCOg demonstrate that
our method performs favorably against state-of-the-art al-
gorithms for both tracking and grounding. Code is avail-
able at https://github.com/lizhou-cs/JointNLT.
| 1. Introduction
Tracking by natural language specification [18] is a task
aiming to locate the target in every frame of a sequence ac-
cording to the state specified by the natural language. Com-
pared with the classical tracking task [25, 33, 34, 41] using
a bounding box to specify the target of interest, tracking
by natural language specification provides a novel human-
machine interaction manner for visual tracking. In addition,
the natural language specification also has two advantages
∗Corresponding authors: Zikun Zhou and Zhenyu He.
Figure 1. Illustration of two different frameworks for tracking by
natural language specification. (a) The separated visual ground-
ing and tracking framework, which consists of two independent
models for visual grounding and tracking, respectively. (b) The
proposed joint visual grounding and tracking framework, which
employs a single model for both visual grounding and tracking.
for the tracking task compared to the bounding box specifi-
cation. First, the bounding box only provides a static repre-
sentation of the target state, while the natural language can
describe the variation of the target for the long term. Sec-
ond, the bounding box contains no direct semantics about
the target and even results in ambiguity [32], but the natural
language can provide clear semantics of the target used for
assisting the tracker to recognize the target. In spite of the
above merits, tracking by natural language specification has
not been fully explored.
Most existing solutions [17,18,32,39] for this task could
be generally divided into two steps: (1) localizing the target
of interest according to the natural language description in
the first frame, i.e., visual grounding; (2) tracking the local-
ized target in the subsequent frames based on the target state
predicted in the first frame, i.e., visual tracking. Accord-
ingly, many algorithms [17,32,39] are designed to incorpo-
rate a grounding model and a tracking model, as shown in
Figure 1(a). Herein the grounding model performs relation
modeling between the language and vision signal to localize
the target, while the tracking model performs relation mod-
eling between the template and search region to localize the
target. The drawback of this framework is that the ground-
ing model and the tracking model are two separate parts
and work independently, ignoring the connections between
the two steps. Besides, many of them [17, 32, 39] choose
to adopt the off-the-shelf grounding model [38] or tracking
model [16] to construct their framework, which means that
the overall framework cannot be trained end-to-end.
The tracking model in most existing algorithms [17, 32,
39] predicts the target state only based on the template,
overlooking the natural language description. By contrast,
the tracking mechanism that considers both the target tem-
plate and the natural language for predicting the target state
has proven to have great potential [12, 13, 18, 31]. Such
a tracking mechanism requires the tracking model to own
the ability to simultaneously model the vision-language re-
lation and the template-search region relation. Inspired by
this tracking mechanism, we come up with the idea to build
a joint relation modeling model to accomplish the above-
mentioned two-step pipeline. Herein a joint relation model-
ing model can naturally connect visual grounding and track-
ing together and also can be trained end-to-end.
To this end, we propose a joint visual grounding and
tracking framework for tracking by natural language spec-
ification, as shown in Figure 1(b). Specifically, we look
at these two tasks from a unified perspective and reformu-
late them as a unified one: localizing the referred target
according to the given visual-language references. For vi-
sual grounding, the reference information is the natural lan-
guage, while for visual tracking, the reference information
is the natural language and historical target patch (usually
called template). Thus, the crux of this unified task is to
model the multi-source relations between the input refer-
ences and the test image, which involve the cross-modality
(visual and language) relation and the cross-time (histori-
cal target patch and current search image) relation. To deal
with this issue, we introduce a transformer-based multi-
source relation modeling module, which is flexible enough
to accommodate the different references for grounding and
tracking, to model the above relations effectively. It allows
our method to switch between grounding and tracking ac-
cording to different inputs.
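A minimal sketch of such a switchable relation-modeling module is given below; the generic TransformerEncoder, token dimensions, and type embeddings are assumptions for illustration, not the exact architecture of this work.

```python
import torch
import torch.nn as nn

class MultiSourceRelationModel(nn.Module):
    """Relates visual-language references to the test image. For grounding, the
    reference is the language only; for tracking, it is the language plus a
    template patch (illustrative sketch)."""
    def __init__(self, dim: int = 256, layers: int = 4, heads: int = 8):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.type_embed = nn.Embedding(3, dim)  # 0: language, 1: template, 2: search

    def forward(self, lang_tok, search_tok, template_tok=None):
        parts = [lang_tok + self.type_embed.weight[0], search_tok + self.type_embed.weight[2]]
        if template_tok is not None:  # tracking mode; omitted for grounding
            parts.insert(1, template_tok + self.type_embed.weight[1])
        fused = self.encoder(torch.cat(parts, dim=1))
        return fused[:, -search_tok.shape[1]:]    # refined search-region tokens

# Usage: the same model switches between grounding (no template) and tracking.
model = MultiSourceRelationModel()
lang = torch.randn(2, 20, 256); search = torch.randn(2, 400, 256); templ = torch.randn(2, 64, 256)
ground_feat = model(lang, search)           # visual grounding
track_feat = model(lang, search, templ)     # tracking
```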
In addition, to improve the adaptability to the variationsof the target, we resort to the historical prediction as they
provide the temporal clue about the recent target appearance
and propose a temporal modeling module to achieve this
purpose. Considering that the natural language specification
contains the global semantic information of the target, we
use it as guidance to assist the temporal modeling module to
focus on the target region instead of the noise in the previous
prediction results.
To conclude, we make the following contributions: (1)
we propose a joint visual grounding and tracking frame-
work for tracking by natural language specification, which
unifies tracking and grounding as a unified task and can ac-
commodate the different references of the grounding and
tracking processes; (2) we propose a semantics-guided tem-
poral modeling module to provide a temporal clue based on
historical predictions for our joint model, which improves
the adaptability of our method to the appearance varia-
tions of the target; (3) we achieve favorable performance
against state-of-the-art algorithms on three natural language
tracking datasets and one visual grounding dataset, which
demonstrates the effectiveness of our approach.
|
Zheng_Both_Style_and_Distortion_Matter_Dual-Path_Unsupervised_Domain_Adaptation_for_CVPR_2023 | Abstract
The ability of scene understanding has sparked active re-
search for panoramic image semantic segmentation. How-
ever, the performance is hampered by distortion of the
equirectangular projection (ERP) and a lack of pixel-wise
annotations. For this reason, some works treat the ERP
and pinhole images equally and transfer knowledge from
the pinhole to ERP images via unsupervised domain adap-
tation (UDA). However, they fail to handle the domain gaps
caused by: 1) the inherent differences between camera sen-
sors and captured scenes; 2) the distinct image formats
(e.g., ERP and pinhole images). In this paper, we propose a
novel yet flexible dual-path UDA framework, DPPASS, tak-
ing ERP and tangent projection (TP) images as inputs. To
reduce the domain gaps, we propose cross-projection and
intra-projection training. The cross-projection training in-
cludes tangent-wise feature contrastive training and predic-
tion consistency training. That is, the former formulates the
features with the same projection locations as positive ex-
amples and vice versa, for the models’ awareness of distor-
tion, while the latter ensures the consistency of cross-model
predictions between the ERP and TP. Moreover, adversarial
intra-projection training is proposed to reduce the inherent
gap between the features of the pinhole images and those
of the ERP and TP images, respectively. Importantly, the
TP path can be freely removed after training, leading to
no additional inference cost. Extensive experiments on two
benchmarks show that our DPPASS achieves a +1.06% mIoU
gain over the state-of-the-art approaches. https://vlis2022.github.io/cvpr23/DPPASS
| 1. Introduction
Increasing attention has been paid to the emerging 360◦
cameras for their omnidirectional scene perception abilities
*Corresponding author.
Figure 1. We tackle a new problem by addressing two types of do-
main gaps, i.e., the inherent gap (style) and format gap (distortion)
between the pinhole and panoramic (360◦) images.
with a broader field of view (FoV) than the traditional pin-
hole images [1]. Intuitively, the ability to understand the
surrounding environment from the panoramic images has
triggered the research for semantic segmentation as it is
pivotal to practical applications, such as autonomous driv-
ing [45, 50] and augmented reality [28]. Equirectangular
projection (ERP) [46] is the most commonly used projec-
tion type for the 360◦ images1 and can provide a complete
view of the scene. However, the ERP type suffers from se-
vere distortion in the polar regions, resulting in noticeable
object deformation. This significantly degrades the perfor-
mance of the pixel-wise dense prediction tasks, e.g., seman-
tic segmentation. Some attempts have been made to de-
sign the convolution filters for feature extraction [32, 50,53];
however, the specifically designed networks are less gener-
alizable to other spherical image data. Moreover, labeled
datasets are scarce, thus making it difficult to train effective
360◦image segmentation models.
To tackle these issues, some methods, e.g., [50] treat the
ERP and pinhole images equally, like the basic UDA task,
1Here, panoramic and 360◦images are interchangeably used.
and directly alleviate the mismatch between ERP and pin-
hole images by adapting the neural networks trained in the
pinhole domain to the 360◦domain via unsupervised do-
main adaptation (UDA). For instance, DensePASS [23] pro-
poses a generic framework based on different variants of
attention-augmented modules. Though these methods can
relieve the need for the annotated 360◦image data [50],
they fail to handle the existing domain gaps caused by: 1)
diverse camera sensors and captured scenes; 2) distinct im-
age representation formats (ERP and pinhole images) and
yield unsatisfied segmentation performance. Accordingly,
we define these two types of domain gaps as the inherent
gap and format gap (See Fig. 1).
In this paper, we consider using the tangent projection
(TP) along with the ERP. It has been shown that TP, the ge-
ometric projection [7] of the 360◦data, suffers from less
distortion than the ERP. Moreover, the deep neural network
(DNN) models designed for the pinhole images can be di-
rectly applied [10]. To this end, we propose a novel dual-
path UDA framework, dubbed DPPASS, taking ERP and
TP images as inputs to each path. The reason is that the
ERP provides a holistic view while TP provides a patch-
wise view of a given scene. For this, the pinhole images
(source domain) are also transformed to the pseudo ERP
and TP formats as inputs. To the best of our knowledge, our
work takes the first effort to leverage two projection for-
mats, ERP and TP, to tackle the inherent and format gaps
for panoramic image semantic segmentation. Importantly,
the TP path can be freely removed after training, therefore,
no extra inference cost is induced.
Specifically, as shown in Fig. 2, the cross-projection
training is proposed at both the feature and prediction lev-
els for tackling the challenging format gap (Sec. 3.2). At
the feature level, the tangent-wise feature contrastive train-
ing aims at mimicking the tangent-wise features with the
same distortion and discerning the features with distinct
distortion, to further learn distortion-aware models and de-
crease the format gap. Meanwhile, the less distorted tan-
gent images are used in the prediction consistency train-
ing. It ensures the consistency between the TP predictions
and the tangent projections of the ERP predictions for mod-
els’ awareness of the distortion variations. For the long-
existing inherent gap, the intra-projection training imposes
the style and content similarities between the features from
the source and target domains for both the ERP and TP im-
ages (Sec. 3.3). As such, we can reduce the large inherent
and format gaps between the 360◦and pinhole images by
taking advantage of dual projections.
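The tangent-wise contrastive objective can be illustrated with an InfoNCE-style sketch, in which the feature pooling, shapes, and temperature are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def tangent_contrastive_loss(erp_feats: torch.Tensor, tp_feats: torch.Tensor, tau: float = 0.1):
    """erp_feats, tp_feats: (N, C) features pooled at N tangent locations from the
    ERP path and the TP path of the same scene. Features at the same index
    (same projection location, hence same distortion) are positives; all other
    pairs serve as negatives."""
    erp = F.normalize(erp_feats, dim=-1)
    tp = F.normalize(tp_feats, dim=-1)
    logits = erp @ tp.t() / tau                      # (N, N) cosine similarities
    targets = torch.arange(erp.shape[0], device=erp.device)
    # Symmetric cross-entropy: pull matching locations together, push others apart.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```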
We conduct extensive experiments from the pinhole
dataset, Cityscapes [6], to two 360◦datasets: DenseP-
ASS [23] and WildPASS [44]. The experimental results
show that our framework surpasses the existing SOTA
methods by 1.06% on the DensePASS test set. In summary, our main contributions are summarized as follows: (I) We
study a new problem by re-defining the domain gaps be-
tween 360◦images and pinhole images as two types: the
inherent gap and format gap. (II) We propose the first UDA
framework taking ERP and tangent images to reduce the
types of domain gaps for semantic segmentation. (III) We
propose cross- and intra-projection training that takes the
ERP and TP at the prediction and feature levels to reduce
the domain gaps.
|
Zhao_Augmentation_Matters_A_Simple-Yet-Effective_Approach_to_Semi-Supervised_Semantic_Segmentation_CVPR_2023 | Abstract
Recent studies on semi-supervised semantic segmenta-
tion (SSS) have seen fast progress. Despite their promising
performance, current state-of-the-art methods tend to in-
creasingly complex designs at the cost of introducing more
network components and additional training procedures.
Differently, in this work, we follow a standard teacher-
student framework and propose AugSeg , a simple and clean
approach that focuses mainly on data perturbations to boost
the SSS performance. We argue that various data aug-
mentations should be adjusted to better adapt to the semi-
supervised scenarios instead of directly applying these tech-
niques from supervised learning. Specifically, we adopt a
simplified intensity-based augmentation that selects a ran-
dom number of data transformations with uniformly sam-
pling distortion strengths from a continuous space. Based
on the estimated confidence of the model on different un-
labeled samples, we also randomly inject labelled infor-
mation to augment the unlabeled samples in an adaptive
manner. Without bells and whistles, our simple AugSeg can
readily achieve new state-of-the-art performance on SSS
benchmarks under different partition protocols1.
| 1. Introduction
Supervised semantic segmentation studies [5, 6, 37, 53]
have recently achieved tremendous progress, but their suc-
cess depends closely on large datasets with high-quality
pixel-level annotations. Delicate and dense pixel-level la-
belling is costly and time-consuming, which becomes a sig-
nificant bottleneck in practical applications with limited la-
belled data. To this end, semi-supervised semantic segmen-
tation (SSS) [27, 39] has been proposed to train models on
less labelled but larger amounts of unlabeled data.
Consistency regularization [42, 43], the currently domi-
nant fundamental SSS method, effectively incorporates the
*Corresponding authors ([email protected], wangjing-
[email protected]). This work is supported by Australian Research
Council (ARC DP200103223).
1Code and logs: https://github.com/zhenzhao/AugSeg .
92 183 366 732 1464
# labeled images65.067.570.072.575.077.580.0mIOU (%)
CPS[CVPR/prime21]
ST++[CVPR/prime22]
PSMT[CVPR/prime22]
U2PL[CVPR/prime22]
AugSeg(ours)Figure 1. Comparison between current SOTAs and our simple
AugSeg on Pascal VOC 2012, using R101 as the encoder.
training on unlabeled data into standard supervised learn-
ing [16, 44]. It relies on the label-preserving data or model
perturbations to produce the prediction disagreement on
the same inputs, such that unlabeled samples can be lever-
aged to train models even if their labeled information is un-
known. Some studies in [17, 29, 50, 51] explored different
data augmentations to benefit the SSS training while works
in [7,16,46] mainly focused on various model perturbations
to obtain competitive SSS performance. On top of these
fundamental designs, recent state-of-the-art (SOTA) meth-
ods aim to integrate extra auxiliary tasks [1,47,56,57], e.g.,
advanced contrastive learning techniques, and more train-
able modules [28, 30, 36, 38], e.g. multiple ensemble mod-
els and additional correcting networks, to further improve
the SSS performance. Despite their promising performance,
SSS studies along this line come at the cost of requiring
more complex methods, e.g., extra network components or
additional training procedures.
In this paper, we break the trend of recent SOTAs
that combine increasingly complex techniques and propose
AugSeg , a simple-yet-effective method that focuses mainly
on data perturbations to boost the SSS performance. Al-
though various auto data augmentations [9,10] and cutmix-
related transformations [17, 52] in supervised learning have
Method | Augmentations | More Supervision | Pseudo-rectifying
SDA FT MBSL CT UCL UAFS ACN PR
CCT [44] ✓ ✓ ✓
ECS [38] ✓ ✓
SSMT [26] ✓ ✓ ✓
PseudoSeg [58] ✓ ✓
CAC [31] ✓ ✓ ✓
DARS [24] ✓ ✓ ✓
PC2Seg [56] ✓ ✓ ✓ ✓
C3-Semiseg [57] ✓ ✓ ✓ ✓
ReCo [34] ✓ ✓ ✓
CPS [7] ✓ ✓
ST++ [50] ✓ ✓
ELN [30] ✓ ✓ ✓
USRN [20] ✓ ✓ ✓ ✓
PSMT [36] ✓ ✓ ✓ ✓
U2PL [47] ✓ ✓ ✓
AugSeg (ours) ✓
Table 1. Comparison of recent SSS algorithms in terms of
“Augmentations”, “More supervision”, and “Pseudo-rectifying”
(sorted by their publication date). We explain the abbrevia-
tions as follows. “ SDA ”: Strong data augmentations, including
various intensity-based and cutmix-related augmentations, “ FT”:
Feature-based augmentations, “ MBSL ”: multiple branches, train-
ing stages, or losses, “ CT”: Co-training, “ UCL ”: unsuper-
vised contrastive learning, “ UAFS ”: uncertainty/attention filter-
ing/sampling, “ ACN ”: additional correcting networks, “ PR”:
prior-based re-balancing techniques. Note that , branches of
“more supervision” and “pseudo-rectifying” typically require
more training efforts. Differently, our method enjoys the best sim-
plicity but the highest performance.
been extensively utilized in previous SSS studies, we ar-
gue that these augmentations should be adjusted precisely
to better adapt to the semi-supervised training. On one
hand , these widely-adopted auto augmentations are essen-
tially designed for supervised paradigm and aim to search
the optimal augmentation strategies from a predefined fi-
nite discrete space. Their optimal objective is constant and
clear across the training course. However, data perturba-
tions in semi-supervised learning consist in generating pre-
diction disagreement on the same inputs, without a constant
and specific objective or a predefined discrete searching
space. Thus, we simplify existing randomAug [10] and de-
sign a highly random intensity-based augmentation, which
selects a random number of different intensity-based aug-
mentations and a random distortion strength from a contin-
uous space. On the other hand , random copy-paste [18]
among different unlabeled samples can yield effective data
perturbations in SSS, but their mixing between correspond-
ing pseudo-labels can inevitably introduce confirmation
bias [3], especially on these instances with less confident
predictions of the model. Considering the utilization effi-
ciency of unlabeled data, we simply mix labeled samples
with these less confident unlabeled samples in a random and
adaptive manner, i.e., adaptively injecting labeled informa-
tion to stabilize the training on unlabeled data. Benefiting
from the simply random and collaborative designs, AugSegrequires no extra operations to handle the distribution is-
sues, as discussed in [51].
Despite its simplicity, AugSeg obtains new SOTA perfor-
mance on popular SSS benchmarks under various partition
protocols. As shown in Figure 1, AugSeg can consistently
outperform current SOTA methods by a large margin. For
example, AugSeg achieves a high mean intersection-over-
union (mIoU) of 75.45% on classic Pascal VOC 2012 us-
ing only 183 labels compared to the supervised baseline of
59.10% and previous SOTA of 71.0% in [50]. We attribute
these remarkable performance gains to our revision – that
various data augmentations are simplified and adjusted to
better adapt to the semi-supervised scenarios. Our main
contributions are summarized as follows,
• We break the trend of SSS studies that integrate
increasingly complex designs and propose AugSeg,
a standard and simple two-branch teacher-student
method that can achieve readily better performance.
• We simply revise the widely-adopted data augmenta-
tions to better adapt to SSS tasks by injecting labeled
information adaptively and simplifying the standard
RandomAug with a highly random design.
• We provide a simple yet strong baseline for future SSS
studies. Extensive experiments and ablations studies
are conducted to demonstrate its effectiveness.
|
Zheng_Learning_Visibility_Field_for_Detailed_3D_Human_Reconstruction_and_Relighting_CVPR_2023 | Abstract
Detailed 3D reconstruction and photo-realistic relight-
ing of digital humans are essential for various applications.
To this end, we propose a novel sparse-view 3D human re-
construction framework that closely incorporates the oc-
cupancy field and albedo field with an additional visibil-
ity field–it not only resolves occlusion ambiguity in multi-
view feature aggregation, but can also be used to evalu-
ate light attenuation for self-shadowed relighting. To en-
hance its training viability and efficiency, we discretize vis-
ibility onto a fixed set of sample directions and supply it
with coupled geometric 3D depth feature and local 2D im-
age feature. We further propose a novel rendering-inspired
loss, namely TransferLoss, to implicitly enforce the align-
ment between visibility and occupancy field, enabling end-
to-end joint training. Results and extensive experiments
demonstrate the effectiveness of the proposed method, as
it surpasses state-of-the-art in terms of reconstruction ac-
curacy while achieving comparably accurate relighting to
ray-traced ground truth.
| 1. Introduction
3D reconstruction and relighting are of great impor-
tance in human digitization, especially in supporting real-
istic rendering in varying virtual environments, which can be
widely applied in AR/VR [32, 37], holographic communication [24, 63], and the movie and gaming industry [7].
Traditional methods often require dense camera setups
using multi-view stereo, non-rigid registration and texture
mapping [9, 13]. To enhance capture realism, researchers
have extended them with additional synchronous variable
illumination systems, which aid photometric stereo for de-
tail reconstruction and material acquisition [12]. However,
these systems are often too complex, expensive and difficult
to maintain, thus preventing widespread applications.
By leveraging deep prior and neural representation, so-
phisticated dense camera setups can be reduced to a sin-
gle camera, leading to blossoms in learning-based human
reconstruction. In particular, encoding human geometry
and appearance as continuous fields using Multi-Layer Per-
ceptron (MLP) has emerged as a promising lead. Starting
from Siclope [36] and PIFu [43], a series of methods [15]
improve the reconstruction performance in speed [11, 25],
quality [44], robustness [65, 66] and light decoupling [1].
However, single-view reconstruction quality is restricted by
its inherent depth ambiguity, thus limiting its application
under view-consistent high-quality requirements.
Therefore, as the trade-off between view coverage and
system accessibility, sparse-view reconstruction has be-
come a research hotspot. The predominant practice is to
project the query point onto each view to interpolate lo-
cal features, which are then aggregated and fed to MLP
for inference [6, 16, 43, 47, 52, 61]. This method suffers
from occlusion ambiguity, where some views may well
be occluded, and mixing their features with visible ones
causes inefficient feature utilization, thus penalizing the re-
construction quality [43]. A natural solution is to filter fea-
tures based on view visibility. Human templates such as
SMPL [29] can serve as effective guidance [2,40,56,64], but
introduce additional template alignment errors and there-
fore cannot guarantee complete occlusion awareness. Func-
tion4D [61] leverages the truncated Projective Signed Dis-
tance Function (PSDF) for visibility indication, but its level
of details is susceptible to depth noise.
To this end, we directly model a continuous visibility
field, which can be efficiently learned with our proposed
framework and discretization technique using sparse-view
RGB-D input. The visibility field enables efficient visibil-
ity query, which effectively guides multi-view feature ag-
gregation for more accurate occupancy and albedo infer-
ence. Moreover, visibility can also be directly used for
light attenuation evaluation–the key ingredient in achiev-
ing realistic self-shadowing. When supervising jointly with
our novel TransferLoss, the alignment between the visibility
field and occupancy field can be implicitly enforced without
between-field constraints, such as matching visibility with
occupancy ray integral. We train our framework end-to-end
and demonstrate its effectiveness in detailed 3D human re-
construction by quantity and quality comparison with the
state-of-the-art. We directly relight the reconstructed geom-
etry with inferred visibility using diffuse Bidirectional Re-
flectance Distribution Function (BRDF) as in Fig. 1, which
achieves photo-realistic self-shadowing without any post
ray-tracing steps. To conclude, our contributions include:
• An end-to-end framework for sparse-view detailed 3D
human reconstruction that also supports direct self-
shadowed relighting.
• A novel method of visibility field learning, in which the
specifically designed TransferLoss significantly im-
proves field alignment.
• A visibility-guided multi-view feature aggregation
strategy that guarantees occlusion awareness.
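As a concrete illustration of the relighting step described above (predicted visibility attenuating a diffuse BRDF over a fixed set of sample directions), the following sketch uses assumed array shapes and a discretized environment light; it is not the authors' renderer.

```python
import numpy as np

def relight_diffuse(albedo, normal, visibility, light_dirs, light_rgb):
    """albedo:     (N, 3) per-point albedo
    normal:     (N, 3) unit surface normals
    visibility: (N, D) visibility in [0, 1] for D fixed sample directions
    light_dirs: (D, 3) unit directions of the discretized environment light
    light_rgb:  (D, 3) radiance arriving from each direction
    """
    cos = np.clip(normal @ light_dirs.T, 0.0, None)   # (N, D) Lambert term
    # Occluded directions contribute less light, giving self-shadowing directly.
    irradiance = (visibility * cos) @ light_rgb       # (N, 3)
    return albedo / np.pi * irradiance
```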
|
Zheng_CVT-SLR_Contrastive_Visual-Textual_Transformation_for_Sign_Language_Recognition_With_Variational_CVPR_2023 | Abstract
Sign language recognition (SLR) is a weakly supervised
task that annotates sign videos as textual glosses. Recent
studies show that insufficient training caused by the lack
of large-scale available sign datasets becomes the main
bottleneck for SLR. Most SLR works thereby adopt pre-
trained visual modules and develop two mainstream solu-
tions. The multi-stream architectures extend multi-cue vi-
sual features, yielding the current SOTA performance, but
they require complex designs and might introduce potential
noise. Alternatively, the advanced single-cue SLR frame-
works using explicit cross-modal alignment between vi-
sual and textual modalities are simple and effective, po-
tentially competitive with the multi-cue framework. In
this work, we propose a novel contrastive visual-textual
transformation for SLR, CVT-SLR, to fully explore the pre-
trained knowledge of both the visual and language modali-
ties. Based on the single-cue cross-modal alignment frame-
work, we propose a variational autoencoder (VAE) for pre-
trained contextual knowledge while introducing the com-
plete pretrained language module. The VAE implicitly
aligns visual and textual modalities while benefiting from
pretrained contextual knowledge as the traditional contex-
tual module. Meanwhile, a contrastive cross-modal align-
ment algorithm is designed to explicitly enhance the consis-
tency constraints. Extensive experiments on public datasets
(PHOENIX-2014 and PHOENIX-2014T) demonstrate that
our proposed CVT-SLR consistently outperforms existing
single-cue methods and even outperforms SOTA multi-cue
methods. The source codes and models are available at
https://github.com/binbinjiang/CVT-SLR .
*Corresponding author. | 1. Introduction
As a special visual natural language, sign language is
the primary communication medium of the deaf community
[19]. With the progress of deep learning [1, 17, 25, 39, 42],
sign language recognition (SLR) has emerged as a multi-
modal task that aims to annotate sign videos into textual
sign glosses. However, a significant dilemma of SLR is
the lack of publicly available sign language datasets. For
example, the most commonly-used PHOENIX-2014 [23]
and PHOENIX-2014T [2] datasets only include about 10K
pairs of sign videos and gloss annotations, which are far
from training a robust SLR system with full supervision as
typical vision-language cross-modal tasks [34]. Therefore,
data limitation that may easily lead to insufficient training
or overfitting problems is the main bottleneck of SLR tasks.
The development of weakly supervised SLR has wit-
nessed most of the improvement efforts focus on the visual
module (e.g., CNN) [9, 10, 15, 29, 32, 33]. Transferring pre-
trained visual networks from general domains of human ac-
tions becomes a consensus to alleviate the low-resource lim-
itation. The mainstream multi-stream SLR framework ex-
tends the pretrained visual module with multi-cue visual in-
formation [3,22,24,43,48,50], including global features and
regional features such as hands and faces in independent
streams. The theoretical support for this approach comes
from sign language linguistics, where sign language utilizes
multiple complementary channels (e.g., hand shapes, fa-
cial expressions) to convey information [3]. The multi-cue
mechanism essentially exploits hard attention to key infor-
mation, yielding the current SOTA performances. However,
the multi-cue framework is complex (e.g., cropping multi-
ple regions, requiring more parameters), and the fusion of
multiple streams might introduce additional potential noise.
Another mainstream advanced solution is the single-cue
Figure 1. (a) An advanced single-cue SLR framework with explicit cross-modal alignment; (b) Our proposed single-cue SLR framework
with explicit cross-modal alignment and implicit autoencoder alignment. Both frameworks use pretrained visual features. But our frame-
work uses the autoencoder module to replace the mainstream contextual module, which not only includes the functions of the contextual
module but also can introduce complete pretrained language knowledge and implicit cross-modal alignment. To maximize the preservation
of the complete pretrained language parameters and migrated visual features, a video-gloss adapter is introduced.
cross-modal alignment framework [15, 28], which consists
of a pretrained visual module followed by a contextual mod-
ule (e.g., RNN, LSTM, Transformer) and a Connectionist
Temporal Classification (CTC) [14] based alignment mod-
ule for gloss generation, as shown in Figure 1 (a). Explicit
cross-modal alignment constraints further improve feature
interactions [15,28,38], which could be treated as a kind of
consistency between two different modalities [50], facilitat-
ing the visual module to learn long-term temporal information
from the contextual module [13, 37]. The cross-modal align-
ment framework is simple and effective, potentially compet-
itive with the multi-cue framework. Despite the advanced
performance of complex multi-cue architectures with pre-
trained visual modules, the cross-modal consistency is a
more elegant design for practical usage. It also implies the
potential of prior contextual linguistic knowledge, which
has been overlooked by existing SLR works.
In this work, we propose a novel contrastive visual-
textual transformation framework for SLR, called CVT-
SLR, to fully explore the pretrained knowledge of both the
visual and language modalities, as shown in Figure 1 (b).
Based on the single-cue cross-modal alignment framework,
CVT-SLR keeps the pretrained visual module but replaces
the traditional contextual module with a variational autoen-
coder (VAE). Since a full encoder-decoder architecture is
used, the VAE is responsible for learning pretrained contex-
tual knowledge based on a pseudo-translation task while in-
troducing the complete pretrained language module. In ad-
dition, the VAE maintains the consistency of input and out-
put modalities due to the form of an autoencoder, playing an
implicit cross-modal alignment role. Furthermore, inspired
by contrastive learning [4–6, 34], we introduce a contrastive alignment algorithm that focuses on both positive and neg-
ative samples to enhance explicit cross-modal consistency
constraints. Extensive quantitative experiments conducted
on the public datasets PHOENIX-2014 and PHOENIX-
2014T demonstrate the advance of the proposed CVT-SLR
framework. Through ablation study and qualitative anal-
ysis, we also verify the effectiveness of introducing pre-
trained language knowledge and the new consistency con-
straint mechanism.
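To make the role of the autoencoder concrete, the following is a minimal sketch of a variational encoder-decoder used in place of a contextual module. The GRU layers, latent size, and gloss vocabulary size are placeholder assumptions; the actual CVT-SLR module builds on a complete pretrained language module rather than the toy layers shown here.

```python
import torch
import torch.nn as nn

class VisualTextualVAE(nn.Module):
    """Visual features are encoded to a latent sequence and decoded back to
    gloss-aligned features, so the module both replaces the contextual module
    and keeps input/output modalities consistent (implicit alignment)."""
    def __init__(self, dim=512, latent=256, num_gloss=1296):
        super().__init__()
        self.encoder = nn.GRU(dim, dim, batch_first=True, bidirectional=True)
        self.to_mu, self.to_logvar = nn.Linear(2 * dim, latent), nn.Linear(2 * dim, latent)
        self.decoder = nn.GRU(latent, dim, batch_first=True)
        self.classifier = nn.Linear(dim, num_gloss)  # gloss logits for the CTC loss

    def forward(self, visual_feats):                 # (B, T, dim)
        h, _ = self.encoder(visual_feats)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        out, _ = self.decoder(z)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return self.classifier(out), kl              # feed logits to CTC; add kl to the loss
```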
Our main contributions are as follows:
• A novel visual-textual transformation-based SLR
framework is proposed, which introduces fully pre-
trained language knowledge for the first time and pro-
vides new approaches for other cross-modal tasks.
• New alignment methods are proposed for cross-modal
consistency constraints: a) exploiting the special prop-
erties of the autoencoder to implicitly align visual
and textual modalities; b) introducing an explicit con-
trastive cross-modal alignment method.
• The proposed single-cue SLR framework not only out-
performs existing single-cue baselines by a large mar-
gin but even surpasses SOTA multi-cue baselines.
|
Zhu_LightedDepth_Video_Depth_Estimation_in_Light_of_Limited_Inference_View_CVPR_2023 | Abstract
Video depth estimation infers the dense scene depth from
immediate neighboring video frames. While recent works
consider it a simplified structure-from-motion (SfM) prob-
lem, it still differs from SfM in that significantly fewer
view angles are available during inference. This setting, how-
ever, suits the mono-depth and optical flow estimation. This
observation motivates us to decouple the video depth es-
timation into two components, a normalized pose estima-
tion over a flowmap and a logged residual depth estimation
over a mono-depth map. The two parts are unified with an
efficient off-the-shelf scale alignment algorithm. Addition-
ally, we stabilize the indoor two-view pose estimation by in-
cluding additional projection constraints and ensuring suf-
ficient camera translation. Though a two-view algorithm,
we validate the benefit of the decoupling with the substantial
performance improvement over multi-view iterative prior
works on indoor and outdoor datasets. Codes and models
are available at https://github.com/ShngJZ/LightedDepth.
| 1. Introduction
Depth estimation is a fundamental task for applica-
tions such as 3D reconstruction [3], robotics [26], and au-
tonomous driving [59]. The depth is self-contained in the
scene motion brought by the camera movement. The clas-
sic SfM methods [17, 31, 37, 38, 54] hence jointly recover
the scene depth and camera poses by applying bundle-
adjustment over the entire video sequence. However, the
iterative optimization defined over all frames makes SfM a
computationally intensive method. Video depth estimation
simplifies the computation by only consuming the immedi-
ate neighboring frames. In consequence, only limited cam-
era view angles are available, as shown in Fig. 2 (a).
The limited camera views, however, suit optical flow
and monocular depth estimation. We are then motivated
to connect video depth to mono-depth and flow estimation
by decoupling the video-depth into two components. First,
we use the flowmap to estimate a normalized up-to-scale
(Compared methods: BTS [27], SfMR [50], DeepMLE [8], DRO [20], MaGNet [1], DeepV2D [41], DeepV2cD [22], and ours.)
Figure 1. Video Depth Performance Comparison on KITTI
Dataset. We mark the methods taking different numbers of frames
with different colors. We propose a two-view video depth estima-
tion method that substantially outperforms prior two-view, three-
view, and five-view methods. Our method uses a monocular depth
as initialization. The arrow marks our improvement when using
the BTS [27] as the initialization. Comparison is detailed in Tab. 1.
camera pose, i.e., camera pose with a unit-length transla-
tion vector. Second, we estimate video depth as a logged
residual over the mono-depthmap. The two components are
unified by an efficient off-the-shelf camera scale alignment
algorithm that aligns the depthmap and flowmap, turning the
residual depth estimation into a stereo-matching problem.
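The scale-alignment step can be illustrated with a brute-force sketch that searches for the translation scale making the mono-depth map consistent with the flow-induced correspondences; this is only an illustration of the objective, not the efficient off-the-shelf algorithm referred to above.

```python
import numpy as np

def align_translation_scale(depth, flow, K, R, t_unit, scales=np.linspace(0.1, 10.0, 200)):
    """depth:  (H, W) monocular depth map
    flow:   (H, W, 2) optical flow from frame 1 to frame 2
    K:      (3, 3) camera intrinsics
    R, t_unit: rotation and unit-norm translation of the up-to-scale pose
    Returns the scale whose reprojection best matches the flow."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(np.float64)
    rays = (np.linalg.inv(K) @ pix.T).T          # back-project pixels
    pts = rays * depth.reshape(-1, 1)            # 3D points from the mono-depth
    target = pix[:, :2] + flow.reshape(-1, 2)    # where each pixel lands in frame 2

    best_s, best_err = None, np.inf
    for s in scales:
        proj = (K @ (R @ pts.T + s * t_unit.reshape(3, 1))).T
        uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)
        err = np.median(np.linalg.norm(uv - target, axis=1))  # robust to outliers
        if err < best_err:
            best_s, best_err = s, err
    return best_s
```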
Unlike our method, most prior video depth estimation
works [41, 46, 50, 52, 55] formulate their solutions as deep
SfM, shown in Fig. 2 (b). They can be grouped into two
types [50]. Type I methods [41, 46, 52] execute SfM within
a fixed frame window, embedding bundle-adjustment as a
differentiable module within a network. Type II meth-
ods [50, 55] execute a consecutive-frame SfM. They se-
quentially estimate an up-to-scale pose and an up-to-scale
depthmap. While prior works solve video depth estimation
as a simplified SfM problem, our method differs in decou-
pling the video depth estimation to two sub-tasks which are
robust to deficient camera views, i.e., flow based normalized
(a) Limited view angles of video depth. (b) Prior Multi-View. (c) Ours Two-View.
Figure 2. (a) Unlike classic SfM, video depth estimation possesses significantly fewer view angles during inference. (b) Prior multi-view
video depth estimation works [ |
Zhou_NeRF_in_the_Palm_of_Your_Hand_Corrective_Augmentation_for_CVPR_2023 | Abstract
Expert demonstrations are a rich source of supervision
for training visual robotic manipulation policies, but imita-
tion learning methods often require either a large number
of demonstrations or expensive online expert supervision to
learn reactive closed-loop behaviors. In this work, we in-
troduce SPARTN (Synthetic Perturbations for Augmenting
Robot Trajectories via NeRF): a fully-offline data augmen-
tation scheme for improving robot policies that use eye-in-
hand cameras. Our approach leverages neural radiance
fields (NeRFs) to synthetically inject corrective noise into
visual demonstrations, using NeRFs to generate perturbed
viewpoints while simultaneously calculating the corrective
actions. This requires no additional expert supervision or
environment interaction, and distills the geometric informa-
tion in NeRFs into a real-time reactive RGB-only policy.
In a simulated 6-DoF visual grasping benchmark, SPARTN
improves success rates by 2.8 ×over imitation learning
without the corrective augmentations and even outperforms
some methods that use online supervision. It additionally
closes the gap between RGB-only and RGB-D success rates,
eliminating the previous need for depth sensors. In real-
world 6-DoF robotic grasping experiments from limited hu-
man demonstrations, our method improves absolute success
rates by 22.5% on average, including objects that are tra-
ditionally challenging for depth-based methods. See video
results at https://bland.website/spartn .
| 1. Introduction
Object grasping is a central problem in vision-based con-
trol and is fundamental to many robotic manipulation prob-
lems. While there has been significant progress in top-
down bin picking settings [21, 34], 6-DoF grasping of ar-
bitrary objects amidst clutter remains an open problem, and
is especially challenging for shiny or reflective objects that
are not visible to depth cameras. For example, the task of
grasping a wine glass from the stem shown in Figure 1 re-
*Equal contribution. Correspondence to [email protected] .
Figure 1. SPARTN is an offline data augmentation method for be-
havior cloning eye-in-hand visual policies. It simulates recovery
in a demonstration by using NeRFs to render high-fidelity obser-
vations (right) from noisy states, then generates corrective action
labels.
quires precise 6-DoF control (using full 3D translation and
3D rotation of the gripper) and closed-loop perception of
a transparent object. Traditional 6-DoF grasping pipelines
[8, 57] synthesize only one grasp pose and use a motion
planner to generate a collision-free trajectory to reach the
grasp [38,40,54,60]. However, the use of open-loop trajec-
tory execution prevents the system from using perceptual
feedback for reactive, precise grasping behavior. In this pa-
per, we study how to learn closed-loop policies for 6-DoF
object grasping from RGB images, which can be trained
with imitation or reinforcement learning methods [58].
Imitation learning from expert demonstrations is a sim-
ple and promising approach to this problem, but is known
to suffer from compounding errors [46]. As a result, com-
plex vision-based tasks can require online expert supervi-
sion [19, 46] or environment interaction [13, 44], both of
which are expensive and time-consuming to collect. On the
other hand, offline “feedback augmentation” methods [14,
22] can be effective at combating compounding errors, but
are severely limited in scope and thus far have not been ap-
plied to visual observations. Other recent works have found
that using eye-in-hand cameras mounted on a robot’s wrist
can significantly improve the performance of visuomotor
policies trained with imitation learning [17, 20, 35], but still
do not address the underlying issue of compounding errors.
We develop an approach that helps address compounding
errors to improve vision-based policies, while building on
the success of eye-in-hand cameras.
To improve imitation learning for quasi-static tasks like
grasping, we propose a simple yet effective offline data aug-
mentation technique. For an eye-in-hand camera, the im-
ages in each demonstration trajectory form a collection of
views of the demonstration scene, which we use to train
neural radiance fields (NeRFs) [37] of each scene. Then, we
can augment the demonstration data with corrective feed-
back by injecting noise into the camera poses along the
demonstration and using the demonstration’s NeRF to ren-
der observations from the new camera pose. Because the
camera to end-effector transform is known, we can compute
corrective action labels for the newly rendered observations
by considering the action that would return the gripper to
the expert trajectory. The augmented data can be combined
with the original demonstrations to train a reactive, real-
time policy. Since the NeRFs are trained on the original
demonstrations, this method effectively “distills” the 3D in-
formation from each NeRF into the policy.
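As a rough illustration of this augmentation loop, the sketch below perturbs each recorded camera pose, renders the perturbed view with a per-demonstration NeRF, and labels it with the relative transform back toward the next expert pose. The demo format, the nerf.render handle, and the identity camera-to-gripper transform are assumptions made for brevity, not the paper's exact interfaces.

import numpy as np

def spartn_augment(demo, nerf, noise_scale=0.01, seed=0):
    # demo: list of (camera_pose, next_expert_pose) pairs as 4x4 homogeneous matrices.
    # nerf.render(pose): hypothetical renderer trained on this demonstration's frames.
    rng = np.random.default_rng(seed)
    augmented = []
    for cam_pose, next_pose in demo:
        noisy = cam_pose.copy()
        noisy[:3, 3] += rng.normal(scale=noise_scale, size=3)  # perturb the camera translation
        observation = nerf.render(noisy)                       # synthetic view from the noisy pose
        # Corrective action: relative transform from the perturbed pose back toward the expert trajectory.
        action = np.linalg.inv(noisy) @ next_pose
        augmented.append((observation, action))
    return augmented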
The main contribution of this work is a NeRF-based data
augmentation technique, called SPARTN (Synthetic Pertur-
bations for Augmenting Robot Trajectories via NeRF), that
improves behavior cloning for eye-in-hand visual grasping
policies. By leveraging view-synthesis methods like NeRF,
SPARTN extends the idea of corrective feedback augmen-
tation to the visual domain. The resulting approach can pro-
duce (i) reactive, (ii) real-time, and (iii) RGB-only policies
for 6-DoF grasping. The data augmentation is fully offline
and does not require additional effort from expert demon-
strators nor online environment interactions. We evaluate
SPARTN on 6-DoF robotic grasping tasks both in simula-
tion and in the real world. On a previously-proposed sim-
ulated 6-DoF grasping benchmark [58], the augmentation
from SPARTN improves grasp success rates by 2.8×com-
pared to training without SPARTN, and even outperforms
some methods that use expensive online supervision. On
eight challenging real-world grasping tasks with a Franka
Emika Panda robot, SPARTN improves the absolute aver-
age success rate by 22.5%.
|
Zhou_SparseFusion_Distilling_View-Conditioned_Diffusion_for_3D_Reconstruction_CVPR_2023 | Abstract
We propose SparseFusion, a sparse view 3D recon-
struction approach that unifies recent advances in neural
rendering and probabilistic image generation. Existing
approaches typically build on neural rendering with re-
projected features but fail to generate unseen regions or
handle uncertainty under large viewpoint changes. Alter-
nate methods treat this as a (probabilistic) 2D synthesis
task, and while they can generate plausible 2D images, they
do not infer a consistent underlying 3D. However, we find
that this trade-off between 3D consistency and probabilistic
image generation does not need to exist. In fact, we show
that geometric consistency and generative inference can be
complementary in a mode-seeking behavior. By distilling a
3D consistent scene representation from a view-conditioned
latent diffusion model, we are able to recover a plausible
3D representation whose renderings are both accurate and
realistic. We evaluate our approach across 51 categories
in the CO3D dataset and show that it outperforms exist-
ing methods, in both distortion and perception metrics, for
sparse-view novel view synthesis. | 1. Introduction
Consider the two images of the teddybear shown in Fig-
ure 1 and try to imagine the underlying 3D object. Relying
on the direct visual evidence in these images, you can easily
infer that the teddybear is white, has a large head, and has
small arms. Even more remarkably, you can imagine be-
yond the directly visible to estimate a complete 3D model
of this object e.g. forming a mental model of the teddy’s
face with (likely black) eyes even though these were not
observed. In this work, we build a computational approach
that can similarly predict 3D from just a few images – by
integrating visual measurements and priors via probabilis-
tic modeling and then seeking likely 3D modes.
A growing number of recent works have studied the re-
lated tasks of sparse-view 3D reconstruction and novel view
synthesis, i.e. inferring 3D representations and/or synthe-
sizing novel views of an object given just a few (typically
2-3) images with known relative camera poses. By lever-
aging data-driven priors, these approaches can learn to effi-
ciently leverage multi-view cues and infer 3D from sparse
views. However, they still yield blurry predictions under
large viewpoint changes and cannot hallucinate plausible
content in unobserved regions. This is because they do
not account for the uncertainty in the outputs e.g. the unob-
served nose of a teddybear may be either red or black, but
these methods, by reducing inference to independent pixel-
wise or point-wise predictions, cannot model such variation.
In this work, we propose to instead model the distribu-
tionover the possible images given observations from some
context views and an arbitrary query viewpoint. Leveraging
a geometrically-informed backbone that computes pixel-
aligned features in the query view, our approach learns a
(conditional) diffusion model that can then infer detailed
plausible novel-view images. While this probabilistic image
synthesis approach allows the generation of higher quality
image outputs, it does not directly yield a 3D representa-
tion of underlying the object. In fact, the (independently)
sampled outputs for each query view often do not even cor-
respond to a consistent underlying 3D e.g. if the nose of
the teddybear is unobserved in context views, one sampled
query view may paint it red, while another one black.
To obtain a consistent 3D representation, we propose a
Diffusion Distillation technique that ‘distills’ the predicted
distributions into an instance-specific 3D representation.
We note that the conditional diffusion model not only gives
us the ability to sample novel-view images but also to (ap-
proximately) compute the likelihood of a generated one.
Using this insight, we optimize an instance-specific (neural)
3D representation by maximizing the diffusion-based like-
lihood of its renderings. We show that this leads to a mode-
seeking optimization that results in more accurate and real-
istic renderings, while also recovering a 3D-consistent rep-
resentation of the underlying object. We demonstrate our
approach on over 50 real-world categories from the CO3D
dataset and show that our method allows recovering accu-
rate 3D and novel views given as few as 2 images as input
– please see Figure 1 for sample results.
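The distillation step can be pictured as the following gradient loop, where an instance-specific 3D representation is updated so that its rendering at a sampled query pose scores a high (approximate) likelihood under the view-conditioned diffusion model. The render_fn and diffusion_log_prob handles are hypothetical placeholders; the actual method operates on a latent diffusion model with its own objective.

import torch

def distillation_step(scene_params, render_fn, diffusion_log_prob, context_views, query_pose, lr=1e-2):
    # scene_params: list of tensors with requires_grad=True that parameterize the 3D scene.
    image = render_fn(scene_params, query_pose)
    loss = -diffusion_log_prob(image, context_views, query_pose)  # maximize likelihood of the rendering
    grads = torch.autograd.grad(loss, scene_params)
    with torch.no_grad():
        for p, g in zip(scene_params, grads):
            p -= lr * g
    return float(loss)

Because every query pose is scored against the same underlying scene parameters, this optimization is mode-seeking: inconsistent per-view samples are averaged out in favor of a single 3D-consistent explanation.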
|
Zhao_Learning_Anchor_Transformations_for_3D_Garment_Animation_CVPR_2023 | Abstract
This paper proposes an anchor-based deformation
model, namely AnchorDEF, to predict 3D garment anima-
tion from a body motion sequence. It deforms a garment
mesh template by a mixture of rigid transformations with
extra nonlinear displacements. A set of anchors around
the mesh surface is introduced to guide the learning of
rigid transformation matrices. Once the anchor transfor-
mations are found, per-vertex nonlinear displacements of
the garment template can be regressed in a canonical space,
which reduces the complexity of deformation space learn-
ing. By explicitly constraining the transformed anchors to
satisfy the consistencies of position, normal and direction,
the physical meaning of learned anchor transformations in
space is guaranteed for better generalization. Furthermore,
an adaptive anchor updating is proposed to optimize the
anchor position by being aware of local mesh topology for
learning representative anchor transformations. Qualita-
tive and quantitative experiments on different types of gar-
ments demonstrate that AnchorDEF achieves the state-of-
the-art performance on 3D garment deformation prediction
in motion, especially for loose-fitting garments.
| 1. Introduction
Animating 3D garments has a wide range of applications
in 3D content generation, digital humans, virtual try-on,
video games, and so on. Existing pipelines of 3D garment
animation usually rely on physics based simulation (PBS),
which requires a large amount of computational resources
and time costs, particularly for high-quality PBS methods.
Some data-driven or learning-based methods have been
proposed to quickly produce 3D garment deformation from
static poses or motion sequences with low computational
complexity [4,5,9,11,12,16,25–28,31,40]. However, many
of them attach garment templates to the skeleton of hu-
man body for modeling the articulation of garments, which
*Corresponding author.
Figure 1. 3D garment deformation predicted by the proposed An-
chorDEF with body motions. Leveraging the anchor transforma-
tions, AnchorDEF is able to realistically deform the garment mesh,
especially for loose-fitting garments, e.g., dresses.
only work with tight garments, e.g., T-shirts and pants, and
poorly address loose-fitting ones, e.g., dresses and skirts.
In this case, the topology of garments is different from the
human body. Therefore, using the skinning blend weights
of body yields discontinuities on deformed garment mesh.
Several methods [26, 34] smooth out the blend weights to
alleviate the discontinuities but may lose the shape details
for large deformations of loose-fitting garments, which do
not closely follow body movements.
To this end, we propose an anchor-based deformation
model, namely AnchorDEF, for predicting 3D garment de-
formation from a body motion sequence. It leverages a
mixture of rigid anchor transformations to represent prin-
cipal rotations and translations of garments to detach the
garment articulation from the body skeleton while preserv-
ing the body movement prior, then nonlinear displacements
can be regressed relatively easily in a canonical space. As
shown in Fig. 1, our method can exploit anchor transforma-
tions to realistically deform the garment mesh, especially
for loose-fitting garments, e.g., dresses.
Specifically, given a sequence of body motions includ-
ing poses and translations, we first estimate rigid transfor-
mations of a set of anchors around the garment mesh. Us-
ing the linear blending skinning (LBS), the garment mesh
template is deformed by a weighted combination of the an-
chor transformations, while per-vertex displacements
of the mesh template are regressed to correct artifacts of the
blended rigid transformations. To learn physically mean-
ingful anchor transformations for better generalization, we
enforce the transformed anchors to maintain consistency
with the target’s position and normal. In addition, a relative
direction constraint is employed to reduce garment-body in-
terpenetrations, which is efficient due to the sparseness of
anchors. To make the learned anchor transformations effec-
tively represent the garment deformation, an adaptive an-
chor updating is further introduced to utilize mesh simplifi-
cation as supervision to optimize the anchor position. It pa-
rameterizes the position with a local attention mask on ad-
jacent mesh vertices and pushes the anchors towards folds
and boundaries of garment mesh which usually determine
the way of deformation.
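The deformation model described above can be summarized by a short linear-blend-skinning sketch: each anchor contributes a rigid transform, the transforms are blended per vertex with learned weights, and a regressed displacement corrects the residual error. Tensor shapes and names here are illustrative only.

import numpy as np

def deform_garment(template_verts, anchor_rots, anchor_trans, blend_weights, displacements):
    # template_verts: (V, 3); anchor_rots: (A, 3, 3); anchor_trans: (A, 3)
    # blend_weights: (V, A), rows summing to 1; displacements: (V, 3) nonlinear correctives.
    per_anchor = np.einsum('aij,vj->avi', anchor_rots, template_verts) + anchor_trans[:, None, :]
    blended = np.einsum('va,avi->vi', blend_weights, per_anchor)  # linear blend skinning over anchors
    return blended + displacements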
The main contributions of our work can be summarized
as follows: 1) We propose an anchor-based deformation
model which learns a set of anchor transformations and
blend weights in a unified framework to represent the de-
formation of 3D garments, especially for loose-fitting ones.
2) We propose to learn anchor transformations by position
and normal consistencies as well as relative direction con-
straint for better generalization and fewer garment-body in-
terpenetrations. 3) We introduce an adaptive anchor updat-
ing with the mesh simplification as supervision to optimize
the anchor position for learning representative anchor trans-
formations.
|
Zhu_OpenMix_Exploring_Outlier_Samples_for_Misclassification_Detection_CVPR_2023 | Abstract
Reliable confidence estimation for deep neural classi-
fiers is a challenging yet fundamental requirement in high-
stakes applications. Unfortunately, modern deep neural
networks are often overconfident for their erroneous predic-
tions. In this work, we exploit the easily available outlier
samples, i.e., unlabeled samples coming from non-target
classes, for helping detect misclassification errors. Partic-
ularly, we find that the well-known Outlier Exposure, which
is powerful in detecting out-of-distribution (OOD) samples
from unknown classes, does not provide any gain in identi-
fying misclassification errors. Based on these observations,
we propose a novel method called OpenMix, which incor-
porates open-world knowledge by learning to reject uncer-
tain pseudo-samples generated via outlier transformation.
OpenMix significantly improves confidence reliability un-
der various scenarios, establishing a strong and unified
framework for detecting both misclassified samples from
known classes and OOD samples from unknown classes.
The code is publicly available at https://github.
com/Impression2805/OpenMix .
| 1. Introduction
Human beings inevitably make mistakes, so do ma-
chine learning systems. Wrong predictions or decisions can
cause various problems and harms, from financial loss to
injury and death. Therefore, in risk-sensitive applications
such as clinical decision making [14] and autonomous driv-
ing [29, 63], it is important to provide reliable confidence
to avoid using wrong predictions, in particular for non-
specialists who may trust the computational models with-
out further checks. For instance, a disease diagnosis model
should hand over the input to human experts when the pre-
diction confidence is low. However, though deep neural net-
works (DNNs) have enabled breakthroughs in many fields,
they are known to be overconfident for their erroneous pre-
*Corresponding author.
(Figure 1 labels: class #1 adult / suit; class #2 child / casual; counterexample dog / suit; misclassification adult, low confidence.)
Figure 1. Illustration of advantages of counterexample data for
reliable confidence estimation. The misclassified image has the
most determinative and shortcut [18] features from class #1 ( i.e.,
suit). Counterexample teaches the model the knowledge of what
is not adult even if it has suit , which could help reduce model’s
confidence on wrong predictions.
dictions [25, 62], i.e., assigning high confidence for ① mis-
classified samples from in-distribution (ID) and ② out-of-
distribution (OOD) samples from unknown classes.
In recent years, many efforts have been made to enhance
the OOD detection ability of DNNs [2, 13, 15, 23, 26, 39],
while little attention has been paid to detecting misclassi-
fied errors from known classes. Compared with the widely
studied OOD detection problem, misclassification detection
(MisD) is more challenging because DNNs are typically
more confident for the misclassified ID samples than that for
OOD data from a different distribution [19]. In this paper,
we focus on the under-explored MisD, and propose a sim-
ple approach to help decide whether a prediction is likely to
be misclassified, and therefore should be rejected.
Towards developing reliable models for detecting mis-
classification errors, we start by asking a natural question:
Why are human beings good at confidence estimation?
A crucial point is that humans learn and predict in con-
text, where we have abundant prior knowledge about other
entities in the open world. According to mental models
[11,31,54] in cognitive science, when assessing the validity
or evidence of a prediction, one would retrieve counterex-
amples, i.e., which satisfy the premise but cannot lead to
the conclusion. In other words, exploring counterexamples
from open world plays an important role in establishing re-
liable confidence for the reasoning problem. Inspired by
this, we attempt to equip DNNs with the above ability so
that they can reduce confidence for incorrect predictions.
Specifically, we propose to leverage outlier data, i.e., un-
labeled random samples from non-target classes, as coun-
terexamples for overconfidence mitigation. Fig. 1 presents
an intuitive example to illustrate the advantages of outlier
samples for reducing the confidence of misclassification.
To leverage outlier samples for MisD, we investigate the
well-known Outlier Exposure (OE) [26] as it is extremely
popular and can achieve state-of-the-art OOD detection per-
formance. However, we find that OE is more of a hin-
drance than a help for identifying misclassified errors. Fur-
ther comprehensive experiments show that existing popular
OOD detection methods can easily ruin the MisD perfor-
mance. This is undesirable as misclassified errors widely
exist in practice, and a model should be able to reliably
reject those samples rather than only reject OOD samples
from new classes. We observe that the primary reason for
the poor MisD performance of OE and other OOD meth-
ods is that: they often compress the confidence region of
ID samples in order to distinguish them from OOD sam-
ples. Therefore, it becomes difficult for the model to further
distinguish correct samples from misclassified ones.
We propose a learning to reject framework to leverage
outlier data. ① First, unlike OE and its variants, which
force the model to output a uniform distribution on all train-
ing classes for each outlier sample, we explicitly break the
closed-world classifier by adding a separate reject class for
outlier samples. ② To reduce the distribution gap between
ID and open-world outlier samples, we mix them via sim-
ple linear interpolation and assign soft labels for the mixed
samples. We call this method OpenMix . Intuitively, the pro-
posed OpenMix can introduce the prior knowledge about
what is uncertain and should be assigned low confidence .
We provide proper justifications and show that OpenMix
can significantly improve the MisD performance. We would
like to highlight that our approach is simple, agnostic to the
network architecture, and does not degrade accuracy when
improving confidence reliability.
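A minimal sketch of the batch construction is given below: ID images are linearly interpolated with unlabeled outliers, and the soft target splits its mass between the original label and an extra reject class. The Beta-distributed mixing coefficient and the exact label form are assumptions made for illustration.

import torch

def openmix_batch(x_id, y_id, x_outlier, num_classes, alpha=1.0):
    # Returns mixed inputs and soft targets over num_classes + 1 classes,
    # where index num_classes is the additional reject class.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    n = min(len(x_id), len(x_outlier))
    x_mix = lam * x_id[:n] + (1.0 - lam) * x_outlier[:n]
    targets = torch.zeros(n, num_classes + 1)
    targets[torch.arange(n), y_id[:n]] = lam      # keep lam of the mass on the true class
    targets[:, num_classes] = 1.0 - lam           # put the remaining mass on the reject class
    return x_mix, targets

Training would then apply a soft-label cross-entropy on these mixed samples alongside the standard loss on clean ID data.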
In summary, our primary contributions are as follows:
• For the first time, we propose to explore the effective-
ness of outlier samples for detecting misclassification
errors. We find that OE and other OOD methods are
useless or harmful for MisD.
• We propose a simple yet effective method named
OpenMix, which can significantly improve MisD per-
formance with enlarged confidence separability be-
tween correct and misclassified samples.
• Extensive experiments demonstrate that OpenMix sig-
nificantly and consistently improves MisD. Besides, it
also yields strong OOD detection performance, serv-
ing as a unified failure detection method. |
Zhuo_Towards_Stable_Human_Pose_Estimation_via_Cross-View_Fusion_and_Foot_CVPR_2023 | Abstract
Towards stable human pose estimation from monocular
images, there remain two main dilemmas. On the one hand,
the different perspectives, i.e., front view, side view, and
top view, appear the inconsistent performances due to the
depth ambiguity. On the other hand, foot posture plays
a significant role in complicated human pose estimation,
i.e., dance and sports, and foot-ground interaction, but un-
fortunately, it is omitted in most general approaches and
datasets. In this paper, we first propose the Cross-View Fu-
sion (CVF) module to catch up with better 3D intermediate
representation and alleviate the view inconsistency based
on the vision transformer encoder. Then the optimization-
based method is introduced to reconstruct the foot pose and
foot-ground contact for the general multi-view datasets in-
cluding AIST++ and Human3.6M. Besides, the reversible
kinematic topology strategy is innovated to utilize the con-
tact information into the full-body with foot pose regressor.
Extensive experiments on the popular benchmarks demon-
strate that our method outperforms the state-of-the-art ap-
proaches by achieving 40.1mm PA-MPJPE on the 3DPW
test set and 43.8mm on the AIST++ test set.
| 1. Introduction
Estimating 3D poses from a monocular RGB camera is
significant in computer vision and artificial intelligence, as
it is fundamental in many applications, e.g. robotics, action
recognition, animation, human-object interaction, etc. Ben-
efiting from the dense representation of SMPL models [18],
SMPL-based methods [9–12] have recently dominated the
3D pose estimation and achieved state-of-the-art results.
Although these methods have considerably decreased the
reconstruction error, they still suffer from two main chal-
lenges in pose stability. Thus, in this paper, we focus on
SMPL-based 3D pose estimation and present a method for
reducing the instability in estimation.
* indicates the equal contributions.
(a) Inconsistent performance
from different perspectives.
(b) Inaccurate foot posture and
foot-ground interaction.
Figure 1. Two main challenges towards stable human pose estima-
tion.
The first challenge is the inconsistent performance of
poses from different perspectives. An example is shown
in Figure 1a that the front view projection of the 3D poses
predicted by the model can be well aligned with the picture,
but from its side view, the human poses are oblique. The
difficulty mainly stems from the fact that estimating 3D hu-
man poses requires a model to extract good 3D intermedi-
ate representation from monocular images, which is diffi-
cult due to the lack of depth input. The second challenge
is the stability of the foot posture. As shown in Figure 1b,
the estimated foot posture is inaccurate and does not match
the foot-ground contact. The main reason is that the con-
tact between the foot and the ground and the posture of foot
joints i.e. heels, foot toes, ankles, etc. are omitted in most
work.
In the literature, most SMPL-based methods directly ex-
tract the holistic features from the image and then feed
them to the subsequent regression networks to calculate the
SMPL parameters [9–12]. These holistic methods do not
explicitly model the pose-related 3D features. In addition,
it is also challenging to directly predict the SMPL param-
Figure 2. The top-down framework for 3D human pose and shape estimation, which consists of three parts, including the vision transformer
encoder, the cross-view attention representation, and the reversible kinematic topology decoder.
eters from the holistic features due to the highly nonlinear
mapping [22]. Our first contribution is that we propose an
intermediate representation architecture called the Cross-
View Fusion module (CVF). It learns a fused 3D interme-
diate representation by supervision over three views: the
front, the side, and the top. Specifically, our method con-
sists of three branches. Each branch learns 2D poses and
features in its corresponding view. Predicting the 2D poses
in side-view and top-view from input images is challenging,
so we design an attention-based architecture that leverages
prior information from the front-view branch to facilitate
the training of side-view and bird-view branches. Thanks to
the better 3D intermediate representation, our method alle-
viates the view inconsistency and outperforms other SMPL-
based methods on 3D pose estimation.
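One way to realize the described prior transfer is a cross-attention block in which side-view or top-view tokens query the front-view tokens, as sketched below. The dimensions, the single-layer design, and the residual connection are illustrative choices rather than the paper's exact architecture.

import torch
import torch.nn as nn

class CrossViewBlock(nn.Module):
    # Side-view or top-view tokens attend to front-view tokens that act as the prior.
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, view_tokens, front_tokens):
        # view_tokens: (B, N, C); front_tokens: (B, M, C)
        fused, _ = self.attn(query=view_tokens, key=front_tokens, value=front_tokens)
        return self.norm(view_tokens + fused)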
Understanding the foot-ground contact and learning the
inherent dynamic dependencies among joints is the key
to solving the challenge of foot stability. However, most
datasets lack the annotation information of foot-ground
contact. For this reason, we propose a method based
on multi-view optimization to add foot-ground annota-
tions to some public datasets, i.e., Human3.6M [7] and
AIST++ [16]. Different from the previous optimization-
based methods ( e.g., SMPLify [1]), our method utilizes
multi-view images, which can deal with the severe joints
occlusion, and thus obtain better foot joint annotations and
foot-ground contact annotations. To the best of our knowl-
edge, our work is the first to perform unified foot-ground
contact annotations on multiple existing large-scale 3D pose
datasets. We believe these additional annotations will fur-
ther improve the human pose estimation task in the future. Inspired by [32], we further propose a Reversible Kinematic
Topology Decoder (RKTD) that can dynamically adjust the
predicted order of individual lower limb joints according to
the state of foot-ground contact.
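Although the paper's optimization pipeline is more involved, the basic idea of turning optimized foot joints into contact labels can be sketched with simple height and velocity thresholds, as below. The thresholds and the thresholding rule are assumptions for illustration, not the paper's exact procedure.

import numpy as np

def label_foot_contacts(foot_joints, ground_z=0.0, height_thresh=0.02, speed_thresh=0.01):
    # foot_joints: (T, J, 3) trajectories of heels, toes, and ankles after multi-view optimization.
    heights = foot_joints[..., 2] - ground_z
    speeds = np.zeros_like(heights)
    speeds[1:] = np.linalg.norm(np.diff(foot_joints, axis=0), axis=-1)  # per-frame displacement
    # A joint is labeled "in contact" when it is close to the ground plane and nearly static.
    return (heights < height_thresh) & (speeds < speed_thresh)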
Our method achieves state-of-the-art performance on
multiple 3D human pose estimation benchmarks. On the
3DPW [31] dataset, it achieves 2.7mm improvement com-
pared to the best art D&D [14]. Although our method
is trained on single-frame images, it does achieve bet-
ter results than existing video-based methods, such as
MAED [33] and D&D [14]. We annotated foot joint and
foot-ground contact on Human3.6M and AIST++ and then
trained our method on them. Our method reduced MPJPE
by 2mm and 3mm on Human3.6M and AIST++, respec-
tively.
In summary, we make the following four contributions:
• We design a 3D intermediate feature representation
module called Cross-View Fusion to extract the fea-
tures of the key points in the front, side, and bird’s
eye views. By doing this, our method achieves more
consistent performances in different perspectives than
other state-of-the-art methods.
• We design an optimization-based scheme to recon-
struct the foot poses and annotate foot-ground con-
tacts for the commonly-used multi-view datasets, in-
cluding AIST++ and Human3.6M. These new annota-
tions can be used to improve pose stability during the
foot-ground interaction in future work.
• We propose a Reversible Kinematic Topology De-
coder (RKTD) that utilizes the foot-ground contact in-
formation to dynamically adapt the prediction order of
the joints on the leg limb chain. This strategy improves
the accuracy of pose estimation when there is a foot
touchdown.
• We conduct extensive experiments on the commonly-
used benchmarks, including 3DPW, Human3.6M, and
AIST++. Compared to other existing methods, our
method achieves state-of-the-art performance quanti-
tatively. The qualitative comparison shows that our
method estimates more stable poses, i.e., the perfor-
mances are more consistent under different views with
more accurate foot-ground contacts.
|
Zhmoginov_Decentralized_Learning_With_Multi-Headed_Distillation_CVPR_2023 | Abstract
Decentralized learning with private data is a cen-
tral problem in machine learning. We propose a novel
distillation-based decentralized learning technique that al-
lows multiple agents with private non-iid data to learn from
each other, without having to share their data, weights or
weight updates. Our approach is communication efficient,
utilizes an unlabeled public dataset and uses multiple aux-
iliary heads for each client, greatly improving training ef-
ficiency in the case of heterogeneous data. This approach
allows individual models to preserve and enhance perfor-
mance on their private tasks while also dramatically im-
proving their performance on the global aggregated data
distribution. We study the effects of data and model archi-
tecture heterogeneity and the impact of the underlying com-
munication graph topology on learning efficiency and show
that our agents can significantly improve their performance
compared to learning in isolation.
| 1. Introduction
Supervised training of large models historically relied on
access to massive amounts of labeled data. Unfortunately,
since data collection and labeling are very time-consuming,
curating new high-quality datasets remains expensive and
practitioners are frequently forced to get by with a limited
set of available labeled datasets. Recently it has been pro-
posed to circumvent this issue by utilizing the existence of
large amounts of siloed private information. Algorithms ca-
pable of training models on the entire available data with-
out having a direct access to private information have been
developed with Federated Learning approaches [24] taking
the leading role.
While very effective in large-scale distributed environ-
ments, more canonical techniques based on federated av-
eraging, have several noticeable drawbacks. First, gradi-
ent aggregation requires individual models to have fully
compatible weight spaces and thus identical architectures.
While this condition may not be difficult to satisfy for suf-
Figure 1. Conceptual diagram of a distillation in a distributed
system. Clients use a public dataset to distill knowledge from
other clients, each having their primary private dataset. Individ-
ual clients may have different architectures and different objective
functions.
ficiently small models trained across devices with compat-
ible hardware limitations, this restriction may be disadvan-
tageous in a more general setting, where some participant
hardware can be significantly more powerful than the oth-
ers. Secondly, federated averaging methods are generally
trained in a centralized fashion. Among other things, this
prohibits the use of complex distributed communication
patterns and implies that different groups of clients cannot
generally be trained in isolation from each other for pro-
longed periods of time.
Another branch of learning methods suitable for dis-
tributed model training on private data are those based on
distillation [3, 6, 15]. Instead of synchronizing the inner
states of the models, such methods use outputs or intermedi-
ate representations of the models to exchange the informa-
tion. The source of data for computing exchanged model
predictions is generally assumed to be provided in the form
of publicly available datasets [12] that do not have to be an-
notated since the source of annotation can come from other
models in the ensemble (see Figure 1). One interesting in-
terpretation of model distillation is to view it as a way of us-
ing queries from the public dataset to indirectly gather infor-
mation about the weights of the network (see Appendix A).
Unlike canonical federated-based techniques, where the en-
tire model state update is communicated, distillation only
reveals activations on specific samples, thus potentially re-
ducing the amount of communicated bits of information. By
the data processing inequality, such reduction, also trans-
lates into additional insulation of the private data used to
train the model from adversaries. However, it is worth not-
ing that there exists multiple secure aggregation protocols
including SecAgg [5] that provide data privacy guarantees
for different Federated Learning techniques.
The family of approaches based on distillation is less
restrictive than canonical federated-based approaches with
respect to the communication pattern, supporting fully dis-
tributed knowledge exchange. It also permits different mod-
els to have entirely different architectures as long as their
outputs or representations are compatible with each other. It
even allows different models to use various data modalities
and be optimizing different objectives, for example mixing
supervised and self-supervised tasks within the same do-
main. Finally, notice that the distillation approaches can
and frequently are used in conjunction with weight aggrega-
tion [21, 30, 31, 37], where some of the participating clients
may in fact be entire ensemble of models with identical ar-
chitectures continuously synchronized using federated ag-
gregation (see Figure 8 in Supplementary).
Our contributions. In this paper, we propose and em-
pirically study a novel distillation-based technique that we
call Multi-Headed Distillation (MHD) for distributed learn-
ing on a large-scale ImageNet [9] dataset. Our approach
is based on two ideas: (a) inspired by self-distillation
[2,10,38] we utilize multiple model heads distilling to each
other (see Figure 2) and (b) during training we simultane-
ously distill client model predictions and intermediate net-
work embeddings to those of a target model. These tech-
niques allow individual clients to effectively absorb more
knowledge from other participants, achieving a much higher
accuracy on a set of all available client tasks compared with
the naive distillation method.
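These two ideas can be combined into a per-pair objective on an unlabeled public batch, sketched below: temperature-softened predictions are matched with a KL term and intermediate embeddings with an MSE term; in the multi-headed setting, the same loss would be applied with a separate auxiliary head per peer. The temperature, weighting, and head wiring are illustrative assumptions, not the paper's exact formulation.

import torch.nn.functional as F

def mhd_distill_loss(student_logits, student_embed, peer_logits, peer_embed, tau=2.0, beta=0.5):
    # Prediction distillation: match softened class distributions on public data.
    kl = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  F.softmax(peer_logits / tau, dim=-1),
                  reduction='batchmean') * (tau ** 2)
    # Embedding distillation: pull intermediate representations toward the peer's.
    embed = F.mse_loss(student_embed, peer_embed.detach())
    return kl + beta * embed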
In our experiments, we explore several key properties
of the proposed model including those that are specific to
decentralized distillation-based techniques. First, we anal-
yse the effects of data heterogeneity, studying two scenar-
ios in which individual client tasks are either identical or
very dissimilar. We then investigate the effects of work-
ing with nontrivial communication graphs and using het-
erogeneous model architectures. Studying complex com-
munication patterns, we discover that even if two clients
in the ensemble cannot communicate directly, they can still
learn from each other via a chain of interconnected clients.
This “transitive” property relies in large part on utilization
of multiple auxiliary heads in our method. We also con-
duct experiments with multi-client systems consisting of
both ResNet-18 and ResNet-34 models [14] and demon-strate that: (a) smaller models benefit from having large
models in the ensemble, (b) large models learning from a
collection of small models can reach higher accuracies than
those achievable with small models only.
|
Zhu_ScaleKD_Distilling_Scale-Aware_Knowledge_in_Small_Object_Detector_CVPR_2023 | Abstract
Despite the prominent success of general object detec-
tion, the performance and efficiency of Small Object Detec-
tion (SOD) are still unsatisfactory. Unlike existing works
that struggle to balance the trade-off between inference
speed and SOD performance, in this paper, we propose
a novel Scale-aware Knowledge Distillation (ScaleKD),
which transfers knowledge of a complex teacher model to
a compact student model. We design two novel modules to
boost the quality of knowledge transfer in distillation for
SOD: 1) a scale-decoupled feature distillation module that
disentangled teacher’s feature representation into multi-
scale embedding that enables explicit feature mimicking of
the student model on small objects. 2) a cross-scale assis-
tant to refine the noisy and uninformative bounding boxes
prediction student models, which can mislead the student
model and impair the efficacy of knowledge distillation. A
multi-scale cross-attention layer is established to capture
the multi-scale semantic information to improve the student
model. We conduct experiments on COCO and VisDrone
datasets with diverse types of models, i.e., two-stage and
one-stage detectors, to evaluate our proposed method. Our
ScaleKD achieves superior performance on general detec-
tion performance and obtains spectacular improvement re-
garding the SOD performance.
| 1. Introduction
Object detection is a fundamental task that has been
developed over the past twenty-year in the computer vi-
sion community. Despite the state-of-the-art performance
for general object detection having been conspicuously
improved since the rise of deep learning, balancing the
complexity-precision for small object detection is still an
open question. Current works strive to refine feature fu-
sion modules [9,21], devise novel training schemes [32,33]
*Corresponding author.
to explicitly train on small objects, design new neural archi-
tectures [20,39] to better extract small objects’ features, and
leverage increased input resolution to enhance representa-
tion quality [1, 49]. However, these approaches struggle to
balance detection quality on small objects with computa-
tional costs at the inference stage.
The above reasons incentivize us to design a cost-free
technique at test time to improve SOD performance. In
the spirit of the eminent success of knowledge distillation
(KD) on image data [14], we explore distillation methods
for SOD. Typically, knowledge distillation opts for a com-
plex, high-performance model (teacher) that transfers its
knowledge to a compact, low-performance model (student).
The student model can harness instructive information to
enhance its representation learning ability. Nevertheless,
unlocking this potential in SOD involves overcoming two
challenges: 1) SOD usually suffers from noisy feature rep-
resentations. Due to the nature of small objects, which gen-
erally take over a small region in the whole image, the fea-
ture representations of these small objects can be contami-
nated by the background and other instances with relatively
larger sizes. 2) Object detectors have a low tolerance for
noisy bounding boxes on small objects. It is inevitable that
teacher models make incorrect predictions. Usually, student
models can extract informative dark knowledge [14, 28]
from imperfect predictions from the teacher. However, in
SOD, small perturbations on the teacher’s bounding box can
dramatically impair SOD performance on the student detec-
tor (§3.2).
To this end, we propose Scale-aware Knowledge Distil-
lation for small object detection (ScaleKD). Our proposed
ScaleKD consists of two modules, a Scale-Decoupled Fea-
ture (SDF) distillation module and a Cross-Scale Assistant
(CSA), to address the aforementioned two challenges corre-
spondingly. The SDF is inspired by the crucial shortcoming
of existing feature distillation methods, where the feature
representations of objects with varying scales are coupled
in a single embedding. It poses difficulty for the student
to mimic small objects’ features from the teacher model.
Figure 1. The overview of Scale-aware Knowledge Distillation. It consists of a Scale-Decoupled Feature distillation module and a Cross-
Scale Assistant module to improve small object detection.
As a result, the proposed SDF aims to decouple a single-
scale feature embedding into a multi-scale feature embed-
ding. The multi-scale embedding is obtained by a paral-
lel multi-branch convolutional block, where each branch
deals with one scale. Our SDF allows the student model
to better understand the feature knowledge from the per-
spective of object scale. Furthermore, we propose a learn-
able CSA to resolve the adverse effect of teachers’ noisy
bounding box prediction on small objects. The CSA com-
prises a multi-scale cross-attention module, where represen-
tations from the teacher and student models are mapped into
a single feature embedding. The multi-scale query-key pair
projects the teacher’s features into multiple sizes, such that
the fine-grained and low-level details can be preserved in
CSA, which helps to produce suitable bounding box super-
vision for the student model.
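To make the scale-decoupled feature mimicking concrete, the sketch below splits a feature map into parallel branches with increasing receptive fields and mimics the teacher branch by branch with an L2 loss; the cross-scale assistant is not shown. The shared dilated-convolution branch design is an assumption for illustration, not the paper's exact module.

import torch.nn as nn
import torch.nn.functional as F

class ScaleDecoupledDistill(nn.Module):
    # One shared branch per object-scale group; each branch is mimicked separately.
    def __init__(self, channels, num_scales=3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in range(1, num_scales + 1))

    def forward(self, student_feat, teacher_feat):
        loss = 0.0
        for branch in self.branches:
            loss = loss + F.mse_loss(branch(student_feat), branch(teacher_feat).detach())
        return loss / len(self.branches)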
We demonstrate the effectiveness of our approach on
COCO object detection and VisDrone datasets. The ex-
periments are conducted on multiple types of detectors, in-
cluding two-stage detectors, anchor-based detectors, and
anchor-free detectors, and have proven the generalizability
of our approach. Our work offers a practical approach for
industrial application on SOD as well as introduces a new
perspective on designing scale-aware KD modules to im-
prove object detectors. We further extend our method on
instance-level detection tasks, such as instance segmenta-
tion and keypoint detection, demonstrating the superiority
of our approach to dealing with small objects in vision tasks.
In summary, our contributions are the following:
• We propose Scale-Aware Knowledge Distillation
(ScaleKD), a novel knowledge distillation framework
to improve general detection and SOD performance
without bringing extra computational costs at test time.• Our proposed ScaleKD not only exceeds state-of-the-
art KD for object detection methods on general de-
tection performance but also surpasses existing ap-
proaches on SOD by a large margin. Extended experi-
ments on instance segmentation and keypoint detection
further strength our method.
|
Zhou_Class-Conditional_Sharpness-Aware_Minimization_for_Deep_Long-Tailed_Recognition_CVPR_2023 | Abstract
It’s widely acknowledged that deep learning models with
flatter minima in their loss landscapes tend to generalize bet-
ter. However, this property is under-explored in deep
long-tailed recognition (DLTR), a practical problem wherethe model is required to generalize equally well across all
classes when trained on highly imbalanced label distribu-tion. In this paper , through empirical observations, we ar-gue that sharp minima are in fact prevalent in deep long-
tailed models, whereas na ¨ıve integration of existing flatten-
ing operations into long-tailed learning algorithms brings
little improvement. Instead, we propose an effective two-
stage sharpness-aware optimization approach based on the
decoupling paradigm in DLTR. In the first st age, both the
feature extractor and classifier are trained under param-eter perturbations at a class-conditioned scale, which istheoretically motivated by the characteristic radius of flatminima under the PAC-Bayesian framework. In the sec-ond st age, we g enerate adversarial features with class-
balanced sampling to further robustify the classifier with thebackbone frozen. Extensive experiments on multiple long-tailed visual recognition benchmarks show that, our pro-posed Class- Conditional Sharpness- Aware Minimization
(CC-SAM), achieves competitive performance compared tothe state-of-the-arts. Code is available at https://
github.com/zzpustc/CC-SAM .
| 1. Introduction
Modern deep learning models, composed of multiple
neural network layers with millions of parameters, haveachieved remarkable successes in computer vision [ 24,33,
†Equal contribution. ⋆Corresponding authors. Work was primarily
done when Z. Zhou worked as L. Li’s intern at Tencent AI Lab, Shenzhen.41,48]. A key enabler of deep learning is the collection
of large-scale datasets [ 29,42,64], which are normally split
into training and testing sets with presumably i.i.d. sam-
ples. However, such scenario provides relatively trivial tests
for the generalization of machine learning models. In prac-
tice, label [ 17,25,56] and domain [ 14,18,23] distribution
shifts are prevalent, due to the disparity between the datapreparation and evaluation protocols. A classical exampleis imbalanced [ 15] or long-tailed recognition [ 60], where
a model is trained on highly imbalanced source label dis-
tribution p
s(y)while evaluated on a uniform target label
distribution pt(y).
In this paper, we focus on the practical yet challeng-
ing deep long-tailed recognition (DLTR) problem, which
is inherent in the visual world [ 32,60] with fundamen-
tal connections to many disciplines such as the power-law scaling in network science [ 2] and the Pareto prin-
ciple in economics [ 39]. In computer vision, numerous
deep long-tailed learning studies have emerged in recentyears, which mainly belong to 5 categories: class re-balancing [ 7,10,28,40,45,52,54], information augmenta-
tion [ 22,27,31,50,55], decoupled training [ 20,58,62], rep-
resentation learning [ 9,32,51,59,65] and ensemble learn-
ing [ 6,53,63].
In this work, we propose a novel approach to DLTR
from a distinct angle , by seeking out flat minima in the loss
landscape of modern neural networks to ensure model ro-
bustness under parameter perturbation. Such optimizationstrategy, termed flattening in our context, have been shown
in a myriad of literature to effectively improve generaliza-
tion of deep learning models in terms of supervised learn-
ing [ 13,21,35,43], self-supervised learning [ 30] and con-
tinual learning [ 11,44]. However, application and adap-
tion of flattening in the context of DLTR remain under-
explored. To fill this gap, we first show later in this paper
(Section 2.2.2 ) that existing flattening methods are ineffec-
tive for long-tailed learning, consistent with the observation
from a very recent paper [ 49], due to severe label distribu-
tion shifts. Accordingly, we present a new efficient variantof the sharpness-aware minimization (SAM) [ 13] technique
based on the Decoupling paradigm [ 20] of DLTR, which
leverages the invariance of the class conditional distributionbetween the source and the target domain. In a nutshell, ourcontributions are three-fold: |
Zhao_DNeRV_Modeling_Inherent_Dynamics_via_Difference_Neural_Representation_for_Videos_CVPR_2023 | Abstract
Existing implicit neural representation (INR) methods
do not fully exploit spatiotemporal redundancies in videos.
Index-based INRs ignore the content-specific spatial fea-
tures and hybrid INRs ignore the contextual dependency on
adjacent frames, leading to poor modeling capability for
scenes with large motion or dynamics. We analyze this lim-
itation from the perspective of function fitting and reveal
the importance of frame difference. To use explicit motion
information, we propose Difference Neural Representation
for Videos (DNeRV), which consists of two streams for con-
tent and frame difference. We also introduce a collabora-
tive content unit for effective feature fusion. We test DNeRV
for video compression, inpainting, and interpolation. DNeRV
achieves competitive results against the state-of-the-art
neural compression approaches and outperforms existing
implicit methods on downstream inpainting and interpolation
for 960×1920 videos.
1 Corresponding author: Zhan Ma ([email protected]). | 1. Introduction
In recent years, implicit neural representations (INR)
have gained significant attention due to their strong ability
in learning a coordinate-wise mapping of different functions.
The main principle behind INR is to learn an implicit
continuous mapping f using a learnable neural network
gθ(·) : R^m → R^n. The idea was first proposed for
the neural radiance fields (NeRF) [28] and since then has
been applied to various applications [4, 8, 58]. INR attempts
to approximate the continuous f by training gθ with m-
dimensional discrete coordinates x ∈ R^m and the corresponding
quantity of interest y ∈ R^n. Once trained, the desired
f can be fully characterized using gθ or the weights θ, and
it is beneficial for tasks that need to model the
intrinsic generalization of the given data, such as the interpolation
or inpainting tasks shown in Fig. 1.
The success of INR can be attributed to the insight that
a learnable and powerful operator with a finite set of data
samples S = {x_i, y_i}, i = 0, ..., N, can fit the unknown mapping
f. The accuracy of the mapping depends on the number
of samples (N) and the complexity of the map f. INR for
Figure 2. Examples of neighboring frames with large mismatch.
Learning continuous INR with such dynamics is challenging.
videos requires a large N, which primarily depends on the
size and internal complexity of the video sequence. Further-
more, video representation is complicated due to different
sampling or frames-per-second (FPS) rates of videos. Large
motion (in terms of direction, speed, rotation, or blur) and
transformations of the objects or scene can make adjacent
frames quite different. Figure 2 shows examples of such
mismatch between consecutive frames, which we attribute
to adjacent dynamics.
Adjacent dynamics are the short-term transformations
in the spatial structure, which are difficult to represent
using existing methods for neural representation of videos
(NeRV). Existing NeRV approaches can be broadly divided
into two groups: (1) Index-based methods, such as [4]
and [21], use a positional embedding of the index as input
and lack content-specific information for given videos. (2)
Hybrid-based methods [3] use frames for index embedding
and neglect the temporal correlation between different
frames. Therefore, neither index- nor frame-based NeRV is
effective against adjacent dynamics.
In this work, we propose Difference NeRV (DNeRV),
which attempts to approximate a dynamical system by
absorbing the differences of adjacent frames, y^D_t = y_t − y_{t−1}
and y^D_{t+1} = y_{t+1} − y_t, as a diff stream input. Further
analysis of the importance of the diff stream is presented in
Section 3. An illustration of the DNeRV pipeline is presented
in Figure 3. A diff encoder captures short-term contextual
correlation in the diff stream, which is then merged with the
content stream for spatiotemporal feature fusion. In addition,
we propose a novel gated mechanism, the collaborative
content unit (CCU), which integrates spatial features in the
content stream and temporal features in the diff stream to
obtain accurate reconstructions for those frames with adjacent
dynamics.
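The diff stream and the gated fusion can be sketched as follows; the backward-difference computation and the 1x1-convolution gate are illustrative simplifications of the CCU rather than its exact design (the paper uses the differences on both sides of each frame).

import torch
import torch.nn as nn

def diff_stream(frames):
    # frames: (T, C, H, W); returns backward differences y_t - y_{t-1}, zero for the first frame.
    diffs = torch.zeros_like(frames)
    diffs[1:] = frames[1:] - frames[:-1]
    return diffs

class CollaborativeContentUnit(nn.Module):
    # Gated fusion of content-stream and diff-stream features.
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, content_feat, diff_feat):
        g = torch.sigmoid(self.gate(torch.cat([content_feat, diff_feat], dim=1)))
        return g * content_feat + (1.0 - g) * diff_feat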
The main contribution of this paper are as follows.
•Existing NeRV methods cannot model content-specic
features and contextual correlations simultaneously. We
offer an explanation using adjacent dynamics. Further-
more, we reveal the importance of diff stream through
heuristic analysis and experiments.
•We propose the Difference NeRV, which can model the
content-specic spatial features with short-term temporal
dependence more effectively and help networkt the im-plicit mapping efciently. We also propose a collabora-
tive content unit to merge the features from two streams
adaptively.
•We present experiments on three datasets (Bunny, UVG,
and Davis Dynamic) and various downstream tasks to
demonstrate the effectiveness of the proposed method.
The superior performance over all other implicit methods
shows the efcacy of modeling videos with large motion.
As a result, DNeRV can be regarded as a new baseline for
INR-based video representation.
|
Zhu_Conditional_Text_Image_Generation_With_Diffusion_Models_CVPR_2023 | Abstract
Current text recognition systems, including those for
handwritten scripts and scene text, have relied heavily on
image synthesis and augmentation, since it is difficult to re-
alize real-world complexity and diversity through collect-
ing and annotating enough real text images. In this paper,
we explore the problem of text image generation, by taking
advantage of the powerful abilities of Diffusion Models in
generating photo-realistic and diverse image samples with
given conditions, and propose a method called Conditional
TextImageGeneration with Diffusion Models (CTIG-DM
for short). To conform to the characteristics of text im-
ages, we devise three conditions: image condition, text con-
dition, and style condition, which can be used to control
the attributes, contents, and styles of the samples in the im-
age generation process. Specifically, four text image gen-
eration modes, namely: (1) synthesis mode, (2) augmen-
tation mode, (3) recovery mode, and (4) imitation mode,
can be derived by combining and configuring these three
conditions. Extensive experiments on both handwritten and
scene text demonstrate that the proposed CTIG-DM is able
to produce image samples that simulate real-world com-
plexity and diversity, and thus can boost the performance
of existing text recognizers. Besides, CTIG-DM shows its
appealing potential in domain adaptation and generating
images containing Out-Of-Vocabulary (OOV) words.
| 1. Introduction
Text recognition has been an important research topic
in the computer vision community for a long time, due
to its wide range of applications. In the past few years,
numerous recognition methods for scene and handwritten
text [3, 16, 43, 57, 58, 65, 69, 72] have been proposed, which
have substantially improved the recognition accuracy on
various benchmarks. The volume and diversity of data are
crucial for high recognition performance, but it is extremely
†Corresponding author.
Figure 1. Handwritten text image samples from IAM [46] or gen-
erated by our proposed CTIG-DM. On the left, the handwriting
styles of the same word “ and” written by different writers are con-
siderably different, indicating the diversity of handwritten text and
the challenge of handwritten text recognition. On the right, two
images out of four in each row are written by the corresponding
writer on the left. Can you distinguish them from the generated
samples? (The answer will be revealed in the next page.)
Figure 2. Scene text image samples from Real-L [5] or produced
by CTIG-DM. Only seven images herein are real. Can you identify
them? (The answer will be revealed in the next page.)
hard, if not impossible, to collect and label sufficient real
text images, so the majority of existing recognition meth-
ods rely heavily on data synthesis and augmentation.
Previously, a variety of data synthesis and augmentation
methods [7, 17, 18, 20, 25, 33, 38, 44, 45, 67] have been pro-
posed to enrich data for training stronger text recognition
models. In this paper, we investigate a technique that is
highly related and complementary to such works. Draw-
ing inspiration from the recent progress of Diffusion Mod-
els [15, 48], we propose a text image generation model,
which is able to conduct data synthesis, and thus can boost
the performance of existing text recognizers.
A recent study [15] has shown that State-Of-The-
Art (SOTA) likelihood-based models [48] can outperform
GAN-based methods [8, 30, 68] in generating images. Dif-
fusion models [23,48,59] have become increasingly
popular, due to their powerful generative ability in various
vision tasks [2, 10, 11, 41, 55]. A typical representative of
diffusion models is Denoising Diffusion Probabilistic Mod-
els (DDPM) [23]. It generates diverse samples through dif-
ferent initial states drawn from a simple distribution and
through each stochastic transition. This means that it is challenging for DDPM to con-
trol the content of the output image due to the randomness
of the initial states and transitions. Guided-Diffusion [15]
provides conditions to diffusion models by adding clas-
sifier guidance. UnCLIP [53] further pre-trains a CLIP
model [52] to match the image and whole text, which are
used as the conditions for the diffusion models in image
generation. While these approaches have focused on nat-
ural images, images with handwritten or scene text have
their unique characteristics (as shown in Fig. 1 and Fig. 2),
which require not only image fidelity and diversity , but also
content validity of the generated samples, i.e., the text con-
tained in the images should be the same as specified in the
given conditions.
In this paper, we present a diffusion model based condi-
tional text image generator, termed Conditional Text Image
Generation with Diffusion Models (CTIG-DM for short).
To the best of our knowledge, this is one of the first works to
introduce diffusion models into the area of text image gen-
eration. The proposed CTIG-DM consists of a conditional
encoder and a conditional diffusion model. Specifically, the
conditional encoder generates three conditions, i.e., image
condition, text condition, and style condition (the writing
style of a specific writer). These conditions are proved to
be critical for the fidelity and diversity of the generated text
images. The conditional diffusion part uses these condi-
tions to generate images from random Gaussian noise. As
can be seen in Fig. 1 and Fig. 2, the quality of the images
generated by CTIG-DM is so high that one can hardly
tell them from real images *. By combining the given con-
ditions, four image generation modes can be derived, i.e.,
synthesis mode, augmentation mode, recovery mode, and
imitation mode. With these modes, various text images that
can be used to effectively boost the accuracy of existing text
recognizers (see Sec. 4 for more details) could be produced.
Moreover, CTIG-DM shows its potential in handling OOV
image generation and domain adaptation.
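As a toy illustration of how generation modes can be derived from subsets of the three conditions, the sketch below sums the embeddings of whichever conditions a mode activates; the particular mode-to-condition mapping and the summation are our own assumptions, not the actual CTIG-DM configuration.

import numpy as np

# Hypothetical mapping from generation mode to active conditions; the real
# pairing used by CTIG-DM may differ.
MODES = {
    "synthesis":    {"text", "style"},
    "augmentation": {"image", "text", "style"},
    "recovery":     {"image", "text"},
    "imitation":    {"image", "style"},
}

def build_condition(mode: str, cond_embeddings: dict) -> np.ndarray:
    """Combine the embeddings of the active conditions; inactive ones are dropped."""
    dim = next(iter(cond_embeddings.values())).shape[-1]
    cond = np.zeros(dim, dtype=np.float32)
    for name, emb in cond_embeddings.items():
        if name in MODES[mode]:
            cond += emb
    return cond  # would be passed to the conditional diffusion sampler

# Toy 16-dim stand-ins for the conditional encoder outputs.
rng = np.random.default_rng(0)
embs = {k: rng.standard_normal(16).astype(np.float32) for k in ("image", "text", "style")}
for mode in MODES:
    print(mode, build_condition(mode, embs)[:3])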
The contributions can be summarized as follows:
• We propose a text image generation method based on
diffusion models, which is one of the first attempts to
use diffusion models to generate text images.
*In Fig. 1, the real images are the first and last of each row. In Fig. 2,
the real images are even numbered.
• We devise three conditions and four image generation
modes, which can facilitate the generation of text im-
ages with high validity, fidelity, and diversity.
• Experiments on both scene text and handwritten text
demonstrate that CTIG-DM can significantly improve
both the image quality and the performance of previ-
ous text recognizers. Besides, CTIG-DM is effective
in OOV image generation and domain adaptation.
|
Zhu_EXCALIBUR_Encouraging_and_Evaluating_Embodied_Exploration_CVPR_2023 | Abstract
Experience precedes understanding. Humans constantly
explore and learn about their environment out of curiosity,
gather information, and update their models of the world.
On the other hand, machines are either trained to learn pas-
sively from static and fixed datasets, or taught to complete
specific goal-conditioned tasks. To encourage the develop-
ment of exploratory interactive agents, we present the EX-
CALIBUR benchmark. EXCALIBUR allows agents to ex-
plore their environment for long durations and then query their understanding of the physical world via inquiries like:
“is the small heavy red bowl made from glass?” or “is
there a silver spoon heavier than the egg?”. This design encourages agents to perform free-form home exploration without myopia induced by goal conditioning. Once the agents have answered a series of questions, they can re-enter
the scene to refine their knowledge, update their beliefs,
and improve their performance on the questions. Our ex-
periments demonstrate the challenges posed by this dataset for the present-day state-of-the-art embodied systems and
the headroom afforded to develop new innovative methods.
Finally, we present a virtual reality interface that enables humans to seamlessly interact within the simulated world
and use it to gather human performance measures. EXCAL-
IBUR affords unique challenges in comparison to present-
day benchmarks and represents the next frontier for embod-
ied AI research.
| 1. Introduction
Humans are active learners, acquiring knowledge of the
physical world through intentional experiments with their bodies and senses. Children as young as a few months old
learn about objects and their environment through observa-
tion and interaction [6, 24]. This sensorimotor experience,
as pointed out by Piaget [47], is critical in forming a funda-
mental understanding of reality. This is the cognitive motivation for the creation of EXCALIBUR.
In contrast, machine learning models typically obtain
knowledge by passively observing web-crawled, encyclopedic, or crowd-sourced static datasets [67]. This pas-
sive approach has clear limitations. For instance, ground-
ing physical concepts like heavy, large, and long requires
moving beyond passive observation. To weigh an object,
humans will often try to use different forces to move it.
To compare the sizes of objects, they move around and perceive the objects from different angles and distances. Although large pre-trained models have made progress in aligning with the grounded world [41, 45], they still lack an
embodied understanding of physical concepts [59].
Today’s popular active, embodied-learning benchmarks
in the Embodied AI community focus on directed task completion. These include navigating to specified GPS coor-
dinates [3], locating an object of a specified category [7],
translating commands into low-level actions [5, 56], and in-
specting a scene to answer a question about the presence or count of an object category [15, 25]. A more recent bench-
mark, Room Rearrangement [62] requires agents to explore the scene, but the focus there is on navigation, observa-
tion, and memorization. Progress on these benchmarks has
been promising. We can now train agents that can compre-
hend goal instructions reasonably well and complete simple tasks, particularly navigation-heavy tasks. None of these benchmarks, however, explicitly probe how these models have learned to represent their environments, nor do they
encourage the type of free-form, undirected, experimental
exploration performed by humans.
To encourage and evaluate the capacity of embodied
agents to openly explore their environment and interact with
objects within it, we present the EXCALIBUR1 benchmark.
1Exploratory Curious Agents with Language Induced Embodied World Understanding
EXCALIBUR is built using large procedurally generated houses via ProcTHOR [18]. Each episode in EXCALIBUR
consists of four phases as shown in Fig. 1. Phase I Explo-
ration – The agent must navigate to and interact with objects
in the environment. Importantly, the agent isn’t seeded with
a goal and must instead perform open-ended exploration.
Interacting with objects takes place via physics-enabled arm
manipulation. Phase II Question Answering – We probe the agent’s understanding of the physical world through natural
language inquiries. Our questions go beyond simple prim-
itive queries, e.g. regarding object existence, and include
physical attributes (e.g. mas |
Zhu_TopNet_Transformer-Based_Object_Placement_Network_for_Image_Compositing_CVPR_2023 | Abstract
We investigate the problem of automatically placing an
object into a background image for image compositing.
Given a background image and a segmented object, the goal
is to train a model to predict plausible placements (loca-
tion and scale) of the object for compositing. The quality of
the composite image highly depends on the predicted loca-
tion/scale. Existing works either generate candidate bound-
ing boxes or apply sliding-window search using global rep-
resentations from background and object images, which fail
to model local information in background images. How-
ever, local clues in background images are important to de-
termine the compatibility of placing the objects with certain
locations/scales. In this paper, we propose to learn the cor-
relation between object features and all local background
features with a transformer module so that detailed infor-
mation can be provided on all possible location/scale con-
figurations. A sparse contrastive loss is further proposed
to train our model with sparse supervision. Our new for-
mulation generates a 3D heatmap indicating the plausibil-
ity of all location/scale combinations in one network for-
ward pass, which is >10×faster than the previous sliding-
window method. It also supports interactive search when
users provide a pre-defined location or scale. The pro-
posed method can be trained with explicit annotation or in
a self-supervised manner using an off-the-shelf inpainting
model, and it outperforms state-of-the-art methods signifi-
cantly. User study shows that the trained model generalizes
well to real-world images with diverse challenging scenes
and object categories.
| 1. Introduction
Object compositing [15, 25] is a common and important
workflow for image editing and creation. The goal is to
insert an object from an image into a given background im-
age such that the resulting image appears visually pleasing
and realistic. Conventional workflows in object composit-
ing rely on manual object placement, i.e. manually deter-
mining where the object should be placed (location) andin what size the object is placed (scale). However, man-
ual placement does not fulfill the growing need for image
creation for social sharing, advertising, education, etc., and
AI-assisted compositing with automatic object placement
is more desirable for future image creation applications.
While there have been several works on learning-based ob-
ject placement for specific scenes, general object placement
with diverse scenes and objects still remains challenging
with limited exploration, as it involves a deeper understand-
ing of common sense, objects, and local details of scenes.
Inaccurate object placement could lead to poor compositing
results, e.g. a person floating in the sky, a dog larger than
buildings, etc.
Existing works [7,10,12,26,29] formulate the problem in
very different ways, as shown in Fig. 1. [10,26] directly pre-
dict multiple transformations or bounding boxes indicating
the location and scale of the given objects. Such sparse pre-
dictions recommend the top candidate placements for users,
but they do not provide any information about other possi-
ble locations and scales. They also fail to leverage the lo-
cal clues in background images, as the bounding boxes are
generated based on only global features. Another thread of
works [12, 24] considers object placement as binary clas-
sification, which evaluates the plausibility of input images
and placement of bounding boxes instead of generating can-
didate placements directly from input images. One recent
work [29] utilizes a retrieval model to assess the plausibility
of a given placement and evaluates a grid of locations and
scales in a sliding-window manner. However, it requires
multiple network forward passes to generate dense evalua-
tion for one image, resulting in a slow inference speed.
In this paper, we propose TopNet, a Transformer-based
Object Placement Network for real-world object composit-
ing applications. Different from previous works, TopNet
formulates object placement as a dense prediction prob-
lem: generating evaluation for a dense grid of locations
and scales in one network forward pass. Given a back-
ground image and an object, TopNet directly generates a
3D heatmap indicating the plausibility score of object lo-
cation and scale, which is >10×faster than the previous
sliding-window method [29]. Previous works [26,29] com-
Figure 1. Comparison between different formulations for object placement: direct prediction of bounding boxes [26] (sparse evaluation, fast), sliding-window evaluation [29] (dense evaluation, slow), and our dense prediction (dense evaluation, fast). Our method provides a dense evaluation of possible locations/scales in one network forward pass.
bine background and foreground object features only at the
global level, which fails to capture local clues for determin-
ing the object location. We propose to learn the correlation
between global foreground object feature and local back-
ground features with a multi-layer transformer, leading to a
more efficient and accurate evaluation of all possible place-
ments. To train TopNet with sparse supervision where only
one ground-truth placement bounding box is provided, we
propose a sparse contrastive loss to encourage the ground-
truth location/scale combination to have a relatively high
score, while only minimizing the other combinations with
the lowest score or a score higher than the ground-truth with
a certain margin, thus preventing large penalty on other rea-
sonable locations/scales. Once the 3D heatmap is predicted,
top candidate placement bounding boxes can be generated
by searching the local maximum in the 3D heatmap. The
3D heatmap also provides guidance for other possible loca-
tions/scales which are not the best candidate. Experiments
on a large-scale inpainted dataset (Pixabay [1]) and anno-
tated dataset (OPA [12]) show the superiority of our ap-
proach over previous methods. Our contributions are sum-
marized as follows:
• A novel transformer-based architecture to model the cor-
relation between object image and local clues from the
background image, and generate dense object placement
evaluation >10×faster than previous sliding-window
method [29].
• A sparse contrastive loss to effectively train a dense pre-
diction network with sparse supervision.
• Extensive experiments on a large-scale inpainted dataset
and annotated dataset with state-of-the-art performance.
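As a small illustration of the dense-prediction formulation above, the sketch below reads top placements off a 3D (scale x location) heatmap by searching for local maxima; the grid resolution, scale range, and square-box parameterization are assumptions made for the example, not TopNet's actual decoder.

import numpy as np
from scipy.ndimage import maximum_filter

def decode_placements(heatmap, scales, img_h, img_w, top_k=3):
    """heatmap: (S, Hg, Wg) plausibility scores over a scale x location grid."""
    # A cell is a local maximum if it equals the maximum of its 3x3x3 neighbourhood.
    local_max = heatmap == maximum_filter(heatmap, size=3)
    s_idx, y_idx, x_idx = np.nonzero(local_max)
    scores = heatmap[s_idx, y_idx, x_idx]
    order = np.argsort(-scores)[:top_k]

    S, Hg, Wg = heatmap.shape
    boxes = []
    for i in order:
        cy = (y_idx[i] + 0.5) / Hg * img_h            # grid cell -> pixel centre
        cx = (x_idx[i] + 0.5) / Wg * img_w
        size = scales[s_idx[i]] * min(img_h, img_w)   # object size from the scale bin
        boxes.append((cx - size / 2, cy - size / 2, size, size, float(scores[i])))
    return boxes  # (left, top, width, height, score)

heat = np.random.rand(8, 16, 16)                      # toy predicted heatmap
print(decode_placements(heat, scales=np.linspace(0.1, 0.8, 8), img_h=512, img_w=768))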
|
Zhou_OcTr_Octree-Based_Transformer_for_3D_Object_Detection_CVPR_2023 | Abstract
A key challenge for LiDAR-based 3D object detection is
to capture sufficient features from large scale 3D scenes es-
pecially for distant or/and occluded objects. Albeit recent
efforts made by Transformers with the long sequence mod-
eling capability, they fail to properly balance the accuracy
and efficiency, suffering from inadequate receptive fields or
coarse-grained holistic correlations. In this paper, we pro-
pose an Octree-based Transformer, named OcTr , to address
this issue. It first constructs a dynamic octree on the hier-
archical feature pyramid through conducting self-attention
on the top level and then recursively propagates to the level
below restricted by the octants, which captures rich global
context in a coarse-to-fine manner while maintaining the
computational complexity under control. Furthermore, for
enhanced foreground perception, we propose a hybrid po-
sitional embedding, composed of the semantic-aware po-
sitional embedding and attention mask, to fully exploit se-
mantic and geometry clues. Extensive experiments are con-
ducted on the Waymo Open Dataset and KITTI Dataset, and
OcTr reaches newly state-of-the-art results.
| 1. Introduction
3D object detection from point clouds has received ex-
tensive attention during the past decade for its ability to pro-
vide accurate and stable recognition and localization in au-
tonomous driving perception systems. In this task, feature
learning plays a very fundamental and crucial role; yet it is
rather challenging due to not only the disordered and sparse
nature of data sampling, but also to insufficient acquisition
under occlusion or at a distance. To address this issue, many
methods have been proposed, which can be taxonomized
into two major classes, i.e. grid-based and point-based. The
former first regularize point clouds into multi-view images
or voxels and then apply 2D or 3D CNNs to build shape rep-
*indicates the corresponding author.
Figure 1. Illustration of three sparsification strategies of attention matrices: (1) fixed patterns (local, dilated, and shifted windows) with a limited receptive field, (2) a global induced set proxy with limited representation, and (3) the proposed octree construction. Fixed pattern (1) narrows receptive fields and set proxy (2) discards elaborate correlations. The proposed octree construction (3) keeps the global receptive field in a coarse-grained manner while maintaining fine-grained representations.
resentations [ 4,52], while the latter directly conduct MLP
based networks such as PointNet++ [ 33] and DGCNN [ 50]
on original points for geometry description [ 32,40,42,60].
Unfortunately, they fail to capture necessary context infor-
mation through the small receptive fields in the deep mod-
els, leading to limited results.
Witnessing the recent success of Transformers in NLP,
many studies have investigated and extended such architec-
tures for 3D vision [ 24,29,58,61]. Transformers are re-
puted to model long-range dependencies, delivering global
receptive fields, and to be suitable for scattered inputs of ar-
bitrary sizes. Meanwhile, in contrast to those static weights
that are learned in convolutions, Transformers dynamically
aggregate the input features according to the relationships
between tokens. Regarding the case in 3D object detection,
compared to point-based Transformers [ 11,61], voxel-based
ones show the superiority in efficiency. However, they tend
to suffer heavy computations when dealing with large scale
scenes because of the quadratic complexity of Transform-
ers, with the underlying dilemma between the grid size and
the grid amount in voxelization. Taking the KITTI dataset
as an example, it is unrealistic for Transformers to operate
on the feature map with the spatial shape of 200×176×5,
which is commonly adopted in most of the detection heads
[38,46,52,55].
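For a rough sense of scale (our own back-of-the-envelope estimate, not a figure reported by the authors): this feature map contains N = 200 × 176 × 5 = 176,000 cells, so treating every cell as a token would require an attention matrix with roughly N^2 ≈ 3.1 × 10^10 entries per head per layer, which is why dense self-attention at this resolution is impractical.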
More recently, there has appeared an influx of efficient
self-attention model variants that attempt to tackle long se-
quences as input. They generally sparsify the attention ma-
trix by fixed patterns [ 7,23,34], learned patterns [ 21,45] or
a combination of them [ 1,56]. Fixed patterns chunk the in-
put sequence into blocks of local windows or dilation win-
dows, whilst learned patterns determine a notion of token
relevance and eliminate or cluster outliers. Specific to 3D
object detection from point clouds, VoTr [24] modifies self-
attention with pre-defined patterns including local windows
and stride dilation ones in a sparse query manner, and the di-
lation mechanism enlarges the receptive field by sampling
attending tokens in a radius. SST [ 9] splits input tokens into
non-overlapping patterns in a block-wise way and enables
window shifting to capture cross-window correlation. De-
spite some improvements reported, they both only achieve
bigger local receptive fields rather than the expected global
ones, and computations still increase rapidly with the ex-
pansion of receptive fields.
Another alternative on self-attention is to take advantage
of a proxy memory bank which has access to the entire
sequence tokens [ 1,2,56]. By using a small number of in-
duced proxies to compress the whole sequence, it diffuses
the global context efficiently. VoxSet [12] adapts Set Trans-
former [ 19] to 3D object detection and exploits an induced
set to model a set-to-set point cloud translation. With the
help of the compressed global proxies and Conv-FFN, it ob-
tains a global receptive field; nevertheless, as they admit, it
is sub-optimal to set only a few latent codes as proxies for a
large 3D scene, prone to impairing the representation of dif-
ferent point cloud structures and their correlations. There-
fore, there remains space for a stronger solution.
In this paper, we present a novel Transformer network,
namely Octree-based Transformer ( OcTr ), for 3D object
detection. We firstly devise an octree-based learnable sparse
pattern, i.e. OctAttn , which meticulously and efficiently en-
codes point clouds of scenes as shown in Fig. 1. The Oc-
tAttn module constructs a feature pyramid by gathering and
applies self-attention to the top level of the feature pyramid
to select the most relevant tokens, which are deemed as the
octants to be divided in the subsequent. When propagating
to the level below, the key/value inputs are restricted by theoctants from the top. Through recursively conducting this
process, OctAttn captures rich global context features by a
global receptive field in a coarse-to-fine manner while re-
ducing the quadratic complexity of vanilla self-attention to
the linear complexity. In addition, for better foreground per-
ception, we propose a hybrid positional embedding, which
consists of the semantic-aware positional embedding and at-
tention mask, to fully exploit geometry and semantic clues.
Thanks to the designs above, OcTr delivers a competitive
trade-off between accuracy and efficiency.
Our contribution is summarized in three-fold:
1. We propose OcTr for voxel-based 3D object detection,
which efficiently learns enhanced representations by a
global receptive field with rich contexts.
2. We propose an octree-based learnable attention sparsi-
fication scheme ( OctAttn ) and a hybrid positional em-
bedding combining geometry and semantics.
3. We carry out experiments on the Waymo Open Dataset
(WOD) and the KITTI dataset and report state-of-the-
art performance with significant gains on far objects.
|
Zhao_Semi-Supervised_Hand_Appearance_Recovery_via_Structure_Disentanglement_and_Dual_Adversarial_CVPR_2023 | Abstract
Enormous hand images with reliable annotations are
collected through marker-based MoCap . Unfortunately,
degradations caused by markers limit their application in
hand appearance reconstruction. A clear appearance re-
covery insight is an image-to-image translation trained with
unpaired data. However, most frameworks fail because
there exists structure inconsistency from a degraded hand
to a bare one. The core of our approach is to first disentan-
gle the bare hand structure from those degraded images and
then wrap the appearance to this structure with a dual ad-
versarial discrimination (DAD) scheme. Both modules take
full advantage of the semi-supervised learning paradigm:
The structure disentanglement benefits from the modeling
ability of ViT, and the translator is enhanced by the dual dis-
crimination on both translation processes and translation
results. Comprehensive evaluations have been conducted
to prove that our framework can robustly recover photo-
realistic hand appearance from diverse marker-contained
and even object-occluded datasets. It provides a novel av-
enue to acquire bare hand appearance data for other down-
stream learning problems.
| 1. Introduction
Both bare hand appearance and vivid hand motion are of
great significance for virtual human creation. A dilemma
hinders the synchronous acquisition of these two: accu-
rate motion capture [20, 27, 68] relies on markers that de-
grade hand appearance, whereas detailed appearance cap-
ture [50, 59, 75] in a markerless setting makes hand motion
hard to track. Is there a win-win solution that guarantees
high fidelity for both?
Existing ones include markerless MoCap [26,83,88] and
graphic rendering [16, 29, 80]. However, the former re-
*Corresponding author. E-mail: [email protected]. This work
was supported in part by the National Natural Science Foundation of China
(No. 62076061), in part by the Natural Science Foundation of Jiangsu
Province (No. BK20220127).
Figure 1. Hand appearance recovery from diverse degrada-
tions . Compared with CycleGAN-based frameworks, we recover
more bare hand appearance while preserving more semantics.
quires a pose estimator [13, 47, 90] trained with laborious
annotations. And the latter often produces artifacts be-
cause it is hard to simulate photo-realistic lighting. An-
other insight is to “translate” the degraded appearances as
bare ones end-to-end. Nevertheless, it is tough to collect
paired data for its training. Moreover, most unsupervised
frameworks [56, 57, 91] are only feasible when the translat-
ing target and source are consistent in structure, while our
task needs to change those marker-related structures in the
source. To this end, our key idea is to first disentangle the
bare hand structure represented by a pixel-aligned map,
and then wrap the appearance on this bare one trained
with a dual adversarial discrimination (DAD) scheme .
There are two strategies to wrap the appearance from one
image to another. (i) Template-based strategies learn [6, 63,
Figure 2. Structure disentanglement from monocular RGBs. (Row-1) Input images. (Row-2) Mesh recovery by a template-based
strategy [90]. (Row-3) Structure prediction by a template-free strategy [76]. (Row-4) Structure prediction by our sketcher w/o bare
structure prior. (Row-5) Structure disentanglement by our full sketcher. Red circles indicate the artifacts in the results.
72] or optimize [2, 55] sophisticated wrappings based on
parametric instance templates [59, 62]. However, the accu-
rate estimation of those parameters is heavily influenced by
the degraded appearance in the images (See Fig. 2 Row-2).
(ii) Template-free ones [40,73] excel at visible feature wrap-
pings between structure-consistent images but are unable
to selectively exclude marker-related features (See Fig. 2
Row-3 and Row-4). To address the problem, we first embed
the bare hand structure prior into pixel-aligned maps. Then
this prior is encoded as the token form [9], and a ViT [15]
sketcher is trained to disentangle the corresponding struc-
ture tokens from partial image patches [30]. Interestingly,
this ViT sketcher satisfies S[S(X)] = S(X)[1], which
means that when feeding its output as the input again, the
two outputs should be consistent. We further utilize this
elegant property to intensively train our sketcher in a semi-
supervised paradigm.
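A minimal sketch of turning the idempotence property S[S(X)] = S(X) into a training signal is given below; the tiny convolutional stand-in for the ViT sketcher and the L1 consistency loss are illustrative assumptions, not the paper's actual semi-supervised recipe.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySketcher(nn.Module):
    """Stand-in for the ViT sketcher: maps an image-like map to a structure map."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

def idempotence_loss(sketcher: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between S(x) and S(S(x)) on unlabeled images."""
    s1 = sketcher(x)
    s2 = sketcher(s1)
    return F.l1_loss(s2, s1)

sketcher = TinySketcher()
imgs = torch.rand(4, 3, 64, 64)          # e.g., unlabeled degraded hand images
loss = idempotence_loss(sketcher, imgs)
loss.backward()
print(float(loss))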
Disappointingly, the recovered appearances remain un-
satisfactory when a structure-assisted translator trained with
existing adversarial paradigms: (i) In popular supervised
paradigms [34, 76], the discriminator focuses on the qual-
ity of the translation process. (ii) In most unsupervised
paradigms [5, 52, 56], the discriminator can only evaluate
the translation result since there is no reliable reference for
the translation process. Based on these two, we innovate
the DAD scheme under a semi-supervised paradigm, which
enables dual discrimination (both on the process and re-sult) in our unpaired translation task. Initially, a partner do-
main is synthesized by degrading hand regions of the bare
one. It possesses pairwise mapping relationships with the
bare target domain, as well as similarity to the degraded
source domain. During the translator training, data from
the source and the partner domain are fed to the transla-
tor simultaneously. The two discriminators evaluate those
translation processes and results with a clear division of la-
bor. This scheme is more efficient than most unsupervised
schemes [57, 91] because of those trustworthy pairs. It is
more generalizable than a supervised scheme trained only
with synthetic degradation [42,43,77] because of those mul-
timodal inputs.
Our main contributions are summarized as follows.
•A semi-supervised framework that makes degraded im-
ages in marker-based MoCap regain bare appearance;
•A powerful ViT sketcher that disentangles bare hand
structure without parametric model dependencies;
•An adversarial scheme that promotes the degraded-to-bare
appearance wrapping effectively.
The codes will be publicly available at https://www.
yangangwang.com .
|
Zhou_Texture-Guided_Saliency_Distilling_for_Unsupervised_Salient_Object_Detection_CVPR_2023 | Abstract
Deep Learning-based Unsupervised Salient Object De-
tection (USOD) mainly relies on the noisy saliency pseudo
labels that have been generated from traditional handcraft
methods or pre-trained networks. To cope with the noisy la-
bels problem, a class of methods focus on only easy samples
with reliable labels but ignore valuable knowledge in hard
samples. In this paper, we propose a novel USOD method to
mine rich and accurate saliency knowledge from both easy
and hard samples. First, we propose a Confidence-aware
Saliency Distilling (CSD) strategy that scores samples con-
ditioned on samples’ confidences, which guides the model to
distill saliency knowledge from easy samples to hard sam-
ples progressively. Second, we propose a Boundary-aware
Texture Matching (BTM) strategy to refine the boundaries
of noisy labels by matching the textures around the pre-
dicted boundaries. Extensive experiments on RGB, RGB-
D, RGB-T, and video SOD benchmarks prove that our
method achieves state-of-the-art USOD performance. Code
is available at www.github.com/moothes/A2S-v2 .
| 1. Introduction
Unsupervised Salient Object Detection (USOD) meth-
ods aim to correctly localize and precisely segment salient
objects simultaneously without using manual annotations.
Compared to the supervised methods, USOD methods can
easily adapt to more practical scenarios ( e.g., industrial or
medical images) where a large number of labeled images
may be very hard to collect. Moreover, USOD methods also
can assist some related methods for other tasks, e.g., object
recognition [16,44] and object detection [11,35]. However,
diverse objects, complex backgrounds, and other challeng-
*Corresponding author. This project is supported by the Key-
Area Research and Development Program of Guangdong Province
(2019B010155003), the National Natural Science Foundation of China
(U22A2095, 62072482, 62076258, 62206316), and the Guangdong NSF
Project (2022A1515011254).
(a) Image, (b) GT, (c) RBD, (d) HS, (e) USPS, (f) A2S, (g) Activation, (h) Ours
Figure 1. Saliency maps generated by USOD methods. Our result
is generated based on the activation map of a deep network.
ing conditions bring severe challenges to USOD methods.
Most Deep Learning-based (DL-based) methods [21,29,
32, 55, 63, 66] are based on the saliency cues extracted by tra-
ditional SOD methods (Fig. 1-c and 1-d). These hand-
crafted feature-related cues are employed as pseudo labels
to train deep networks under certain constraints, e.g., bi-
nary cross-entropy (BCE) loss. However, saliency cues by
traditional methods usually shift away from target objects,
especially in complex scenes. Moreover, conventional con-
straints, such as BCE loss, work well on fully-supervised
SOD methods, but are suboptimal when fitting the noisy la-
bels for unsupervised methods (Fig. 1-e). Recently, Zhou
et al. [72] addressed the first issue by extracting saliency
cues (Fig. 1-f) based on an unsupervisedly pre-trained net-
work (Fig. 1-g) instead of using traditional methods. Dur-
ing training, they focus on learning reliable saliency knowl-
edge from easy samples, but ignore latent knowledge in
hard samples. The main reason is that hard samples may
be wrongly-labeled and corrupt the fragile saliency knowl-
edge learned in the early training phase. Therefore, to lever-
age hard samples, we argue that all samples should be em-
ployed in a meaningful order (i.e., from more reliable to less
reliable), which is crucial for mining accurate knowledge
from noisy labels. Trained by such a strategy, the network
can mine valuable knowledge from hard examples without
corrupting the knowledge learned from easy samples.
Deep networks can learn to localize salient regions from
noisy labels [72], but still struggle to find the precise bound-
aries of target objects. Generally, the appearance around
the saliency boundary has a similar texture to that in the saliency map.
Therefore, matching the textures between different maps
can serve as guidance for producing reasonable saliency
boundaries. We will demonstrate that the above strategies are
applicable to multimodal data besides RGB images, includ-
ing depth map, thermal image, and optical flow.
Based on the above analysis, we propose a novel frame-
work to tackle the Unsupervised Salient Object Detection
(USOD) tasks. Specifically, two strategies are proposed to
mine saliency knowledge from noisy saliency labels. First,
we propose a Confidence-aware Saliency Distilling (CSD)
scheme that scores samples with noisy labels conditioned
on samples’ confidences. Then, our CSD guides the net-
work to learn saliency knowledge from easy samples to
more complex ones progressively by employing an adaptive
loss conditioned on the training progress. Second, we pro-
pose a Boundary-aware Texture Matching (BTM) strategy
to refine the saliency boundaries of noisy labels by matching
the textures around the predicted boundaries. During train-
ing, the predicted saliency boundaries shift toward
surrounding edges in the appearance space of the whole im-
age. Finally, guided by the above two mechanisms, our method
can produce high-quality pseudo labels to train general-
ized saliency detectors. Extensive experiments on RGB,
RGB-D, RGB-T, and video SOD benchmarks prove that our
method achieves state-of-the-art performance compared to
existing USOD methods.
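One plausible (and deliberately simplified) instantiation of such confidence-aware scoring is sketched below: each pixel's pseudo-label loss is weighted by how far the prediction is from the 0.5 decision boundary, and the confidence threshold relaxes as training progresses so that harder samples join later. The exact CSD loss in the paper is not this formula.

import torch
import torch.nn.functional as F

def csd_like_loss(pred, pseudo, progress):
    """pred, pseudo: (B, 1, H, W) saliency maps in [0, 1]; progress in [0, 1]."""
    confidence = (pred.detach() - 0.5).abs() * 2.0     # 0 = ambiguous, 1 = confident
    threshold = 0.9 * (1.0 - progress)                 # shrinks as training proceeds
    weight = (confidence >= threshold).float() * confidence
    per_pixel = F.binary_cross_entropy(pred, pseudo, reduction="none")
    return (weight * per_pixel).sum() / weight.sum().clamp(min=1.0)

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
pred = torch.sigmoid(logits)
pseudo = (torch.rand(2, 1, 64, 64) > 0.5).float()      # noisy pseudo labels
loss = csd_like_loss(pred, pseudo, progress=0.3)
loss.backward()
print(float(loss))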
The main contributions of our novel USOD method are:
1. We propose a Confidence-aware Saliency Distilling
(CSD) to mine rich and accurate saliency knowledge
from noisy labels, which breaks through the limitation
that existing methods cannot utilize hard samples.
2. We propose a Boundary-aware Texture Matching
(BTM) to refine the boundary of the predicted saliency
maps by matching textures in different spaces.
3. Extensive experiments on RGB, RGB-D, RGB-T
and video SOD benchmarks prove that our method
achieves state-of-the-art USOD performance.
|
Zhuang_GKEAL_Gaussian_Kernel_Embedded_Analytic_Learning_for_Few-Shot_Class_Incremental_CVPR_2023 | Abstract
Few-shot class incremental learning (FSCIL) aims to
address catastrophic forgetting during class incremental
learning in a few-shot learning setting. In this paper, we
approach the FSCIL by adopting analytic learning, a tech-
nique that converts network training into linear problems.
This is inspired by the fact that the recursive implemen-
tation (batch-by-batch learning) of analytic learning gives
identical weights to that produced by training on the en-
tire dataset at once. The recursive implementation and the
weight-identical property highly resemble the FSCIL setting
(phase-by-phase learning) and its goal of avoiding catas-
trophic forgetting. By bridging the FSCIL with the ana-
lytic learning, we propose a Gaussian kernel embedded an-
alytic learning (GKEAL) for FSCIL. The key components
of GKEAL include the kernel analytic module which allows
the GKEAL to conduct FSCIL in a recursive manner, and
the augmented feature concatenation module that balances
the preference between old and new tasks especially effec-
tively under the few-shot setting. Our experiments show that
the GKEAL gives state-of-the-art performance on several
benchmark datasets.
| 1. Introduction
Class-incremental learning (CIL) [20] can continuously
absorb new category knowledge in a phase-by-phase man-
ner with data coming separately in each phase, after training
a classification network. This is important as data can be
scattered at various times and locations in a non-identical
independent way. The few-shot class incremental learning
(FSCIL) [23] further imposes an inefficiency constraint on
the data availability. That is, only a few data samples, i.e.,
few-shot, for each new class are allowed, leading to a more
challenging incremental learning problem.
The major challenge for FSCIL follows from the CIL’s,
namely the catastrophic forgetting . The performance on old
Figure 1. The resemblance between the analytic learning (recursive form) [33] and incremental learning: batch-by-batch training with recursive analytic learning yields weights equal to those from training with the entire data together (the weight-invariant property). We want to build a bridge between these two fields to take advantage of the analytic learning for addressing the FSCIL.
(base) tasks is tremendously discounted after learning new
tasks. This is caused by the lack of training data for old
tasks, tricking models to focus only on new tasks. The for-
getting issue is also referred to as task-recency bias, in favor
of newly learned tasks in prediction. The forgetting issue in
FSCIL manifests more quickly due to over-fitting than that
in the conventional CIL setting as the training samples be-
come scarce for new tasks.
To handle the forgetting, conventional CIL sparks
various contributions, which mainly include the Bias
correction-based CIL [1,9], Regularization-based CIL [11,
13] and Replay-based CIL [17, 20]. They work well in ad-
dressing the catastrophic forgetting in CIL. However, the
few-shot constraint in FSCIL renders the CIL solutions ob-
solete (see [23] or our experiments). There have been sev-
eral works [22,23,31] taking into account the few-shot con-
straint, outperforming the conventional CIL. These FSCIL
techniques take inspirations from existing CIL variants [22]
or the few-shot learning angle (e.g., prototype-based [31])
to prevent catastrophic forgetting.
In this paper, inspired by analytic learning [33,
34]—a technique converting network training into linear
problems—we approach the FSCIL from a unique angle by
incorporating traditional machine learning techniques. The
analytic learning allows the training to be implemented in
a recursive manner where training data are scattered into
multiple batches. Yet the weights trained recursively are
identical to those trained by pouring the entire data in one
go [33]. We may call this weight-invariant (or weight-
identical) property. Such recursive form and its weight-
invariant property highly resemble the incremental learning
paradigm and its objective of avoiding (catastrophic) for-
getting respectively (see Figure 1). Following this intuition,
we propose a Gaussian kernel embedded analytic learning
(GKEAL) for FSCIL. The GKEAL adopts traditional ma-
chine learning tools such as least squares (LS) and matrix
inverse to avoid forgetting. The key contributions are sum-
marized as follows.
• We introduce GKEAL by treating the FSCIL as a re-
cursive learning problem to avoid forgetting. We prove that
the GKEAL in the FSCIL setting follows the same weight-
invariant property as that in analytic learning.
• To bridge analytic learning into the FSCIL realm, the
GKEAL replaces the classifier at a network’s final layer
with a kernel analytic module (KAM). The KAM contains
a Gaussian kernel embedding process for extracting more
discriminative feature, and an LS solution that allows the
GKEAL to learn new tasks in a recursive manner.
• To mitigate the data imbalance between the base and
new tasks, an augmented feature concatenation (AFC) mod-
ule is introduced, which effectively balances the network’s
base-new task preference.
• Experiments on benchmark datasets show that the
GKEAL outperforms the state-of-the-art methods by a con-
siderable margin. Ablation study is also provided, giving
thorough analysis of the hyperparameters introduced, as
well as strong supports to our theoretical claims.
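To illustrate the weight-invariant property of recursive least squares that GKEAL builds on, here is a generic NumPy demonstration (not the GKEAL update equations themselves): accumulating the regularized normal equations batch by batch recovers exactly the weights obtained from the full dataset.

import numpy as np

def full_batch_ls(X, Y, lam=1e-3):
    """Regularized least-squares weights from the entire dataset at once."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def recursive_ls(batches, d, k, lam=1e-3):
    """The same weights, learned batch by batch (phase by phase)."""
    A = lam * np.eye(d)            # running X^T X + lam * I
    b = np.zeros((d, k))           # running X^T Y
    for Xb, Yb in batches:
        A += Xb.T @ Xb
        b += Xb.T @ Yb
    return np.linalg.solve(A, b)

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((500, 32)), rng.standard_normal((500, 10))
batches = [(X[i:i + 100], Y[i:i + 100]) for i in range(0, 500, 100)]
print(np.allclose(full_batch_ls(X, Y), recursive_ls(batches, d=32, k=10)))  # True

A practical recursive implementation would update the inverse with the Woodbury identity instead of storing the accumulators, but the identity of the final weights is the same.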
|
Zheng_FeatER_An_Efficient_Network_for_Human_Reconstruction_via_Feature_Map-Based_CVPR_2023 | Abstract
Recently, vision transformers have shown great success
in a set of human reconstruction tasks such as 2D/3D hu-
man pose estimation (2D/3D HPE) and human mesh recon-
struction (HMR) tasks. In these tasks, feature map rep-
resentations of the human structural information are of-
ten extracted first from the image by a CNN (such as HR-
Net), and then further processed by transformer to predict
the heatmaps for HPE or HMR. However, existing trans-
former architectures are not able to process these feature
map inputs directly, forcing an unnatural flattening of the
location-sensitive human structural information. Further-
more, much of the performance benefit in recent HPE and
HMR methods has come at the cost of ever-increasing com-
putation and memory needs. Therefore, to simultaneously
address these problems, we propose FeatER, a novel trans-
former design that preserves the inherent structure of fea-
ture map representations when modeling attention while re-
ducing memory and computational costs. Taking advan-
tage of FeatER, we build an efficient network for a set of
human reconstruction tasks including 2D HPE, 3D HPE,
and HMR. A feature map reconstruction module is applied
to improve the performance of the estimated human pose
and mesh. Extensive experiments demonstrate the effective-
ness of FeatER on various human pose and mesh datasets.
For instance, FeatER outperforms the SOTA method Mesh-
Graphormer by requiring 5% of Params and 16% of MACs
on Human3.6M and 3DPW datasets. The project webpage
ishttps://zczcwh.github.io/feater_page/ .
| 1. Introduction
Understanding human structure from monocular images
is one of the fundamental topics in computer vision. The
corresponding tasks of Human Pose Estimation (HPE) and
Human Mesh Reconstruction (HMR) have received a growing interest from researchers, accelerating progress toward
various applications such as VR/AR, virtual try-on, and AI
coaching. However, HPE and HMR from a single image
still remain challenging tasks due to depth ambiguity, oc-
clusion, and complex human body articulation.
Figure 1. Generating heatmaps from joint coordinates: a Gaussian kernel is placed at each ground-truth joint coordinate to produce the GT heatmap of that joint (e.g., right ankle, left wrist).
With the blooming of deep learning techniques, Con-
volutional Neural Network (CNN) [10, 32, 33] architec-
tures have been extensively utilized in vision tasks and
have achieved impressive performance. Most existing HPE
and HMR models [13, 33] utilize CNN-based architectures
(such as ResNet [10] and HRNet [33]) to predict fea-
ture maps, which are supervised by the ground-truth 2D
heatmap representation (encodes the position of each key-
point into a feature map with a Gaussian distribution) as
shown in Fig. 1. This form of output representation and
supervision can make the training process smoother, and
therefore has become the de facto process in HPE’s net-
works [1, 33, 43].
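For reference, a common way to build such a ground-truth heatmap from an annotated joint coordinate is shown below; the map size and Gaussian sigma are arbitrary example values, not the settings used by FeatER.

import numpy as np

def joint_heatmap(x, y, height, width, sigma=2.0):
    """Place a 2D Gaussian centred at the joint location (x, y), in pixels."""
    xs = np.arange(width, dtype=np.float32)
    ys = np.arange(height, dtype=np.float32)[:, None]
    return np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))

heatmap = joint_heatmap(x=40.0, y=25.0, height=64, width=64)
print(heatmap.shape, float(heatmap.max()), np.unravel_index(heatmap.argmax(), heatmap.shape))
# (64, 64) 1.0 (25, 40) -- the peak sits at the annotated joint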
Recently, the transformer architecture has been fruitfully
adapted from the field of natural language processing (NLP)
into computer vision, where it has enabled state-of-the-art
performance in HPE and HMR tasks [23–25, 38, 47]. The
transformer architecture demonstrates a strong ability to
model global dependencies in comparison to CNNs via its
self-attention mechanism. The long-range correlations be-
tween tokens can be captured, which is critical for modeling
the dependencies of different human body parts in HPE and
HMR tasks. Since feature maps concentrate on certain hu-
man body parts, we aim to utilize the transformer architec-
ture to refine the coarse feature maps (extracted by a CNN
backbone). After capturing the global correlations between
human body parts, more accurate pose and mesh can be ob-
tained.
However, inheriting from NLP where transformers em-
bed each word to a feature vector, Vision Transformer ar-
chitectures such as ViT [8] can only deal with the flattened
features when modeling attention. This is less than ideal for
preserving the structural context of the feature maps dur-
ing the refinement stage (feature maps with the shape of
[n, h, w] need to be flattened as [n, d], where d = h × w.
Here n is the number of feature maps, and h and w are the height
and width of each feature map, respectively). Furthermore,
another issue is that the large embedding dimension caused
by the flattening process makes the transformer computa-
tionally expensive. This is not suitable for real-world ap-
plications of HPE and HMR, which often demand real-time
processing capabilities on deployed devices (e.g. AR/VR
headsets).
Therefore, we propose a Feature map-based trans-
formER (FeatER) architecture to properly refine the coarse
feature maps through global correlations of structural in-
formation in a resource-friendly manner. Compared to the
vanilla transformer architecture, FeatER has two advan-
tages:
• First, FeatER preserves the feature map representation
in the transformer encoder when modeling self-attention,
which is naturally adherent with the HPE and HMR tasks.
Rather than conducting the self-attention based on flat-
tened features, FeatER ensures that the self-attention is
conducted based on the original 2D feature maps, which
are more structurally meaningful. To accomplish this,
FeatER is designed with a novel dimensional decompo-
sition strategy to handle the extracted stack of 2D feature
maps.
• Second, this decompositional design simultaneously pro-
vides a significant reduction in computational cost com-
pared with the vanilla transformer1. This makes FeatER
more suitable for the needs of real-world applications.
Equipped with FeatER, we present an efficient frame-
work for human representation tasks including 2D HPE,
3D HPE, and HMR. For the more challenging 3D HPE and
HMR portion, a feature map reconstruction module is inte-
grated into the framework. Here, a subset of feature maps
are randomly masked and then reconstructed by FeatER, en-
abling more robust 3D pose and mesh predictions for in-the-
1For example, there are 32 feature maps with overall dimension
[32, 64, 64]. For a vanilla transformer, without discarding information,
the feature maps need to be flattened into [32, 4096]. One vanilla trans-
former block requires 4.3G MACs. Even if we reduce the input size to
[32, 1024], it still requires 0.27G MACs. However, given the original in-
put of [32, 64, 64], FeatER only requires 0.09G MACs.
man representation tasks, including 2D human pose estima-
tion on COCO, 3D human pose estimation and human mesh
reconstruction on Human3.6M and 3DPW datasets. Our
method (FeatER) consistently outperforms SOTA methods
on these tasks with significant computation and memory
cost reduction (e.g. FeatER outperforms MeshGraphormer
[25] with only requiring 5% of Params and 16% of MACs).
|
Son_SinGRAF_Learning_a_3D_Generative_Radiance_Field_for_a_Single_CVPR_2023 | Abstract
Generative models have shown great promise in syn-
thesizing photorealistic 3D objects, but they require large
amounts of training data. We introduce SinGRAF , a 3D-
aware generative model that is trained with a few input
images of a single scene. Once trained, SinGRAF gener-
ates different realizations of this 3D scene that preserve the
appearance of the input while varying scene layout. For
this purpose, we build on recent progress in 3D GAN ar-
chitectures and introduce a novel progressive-scale patch
discrimination approach during training. With several ex-
periments, we demonstrate that the results produced by Sin-
GRAF outperform the closest related works in both quality
and diversity by a large margin.
| 1. Introduction
Creating a new 3D asset is a laborious task, which often
requires manual design of triangle meshes, texture maps,
and object placements. As such, numerous methods were
proposed to automatically create diverse and realistic varia-
tions of existing 3D assets. For example, procedural model-
ing techniques [11,27] produce variations in 3D assets given
predefined rules and grammars, and example-based model-
ing methods [13, 21] combine different 3D components to
generate new ones.
With our work, we propose a different, generative strat-
egy that is able to create realistic variations of a single 3D
scene from a small number of photographs. Unlike existing
3D generative models, which typically require 3D assets as
input [13,51], our approach only takes a set of unposed im-
ages as input and outputs a generative model of a single 3D
scene, represented as a neural radiance field [31].
Our method, dubbed SinGRAF, builds on recent
progress in unconditional 3D-aware GANs [5,40] that train
generative radiance fields from a set of single-view images.
However, directly applying these 3D GANs to our problem
*Equal contribution.
Project page: computationalimaging.org/publications/singraf/
Figure 1. SinGRAF generates different plausible realizations of a
single 3D scene from a few unposed input images of that scene.
In this example, i.e., the “office 3” scene, we use 100 input im-
ages, four of which are shown in the top row. Next, we visualize
four realizations of the 3D scene as panoramas, rendered using the
generated neural radiance fields. Note the variations in scene lay-
out, including chairs, tables, lamps, and other parts, while staying
faithful to the structure and style of the input images.
is challenging, because they typically require a large train-
ing set of diverse images and often limit their optimal oper-
ating ranges to objects, rather than entire scenes. SinGRAF
makes a first attempt to train a 3D generative radiance field
for individual indoor 3D scenes, creating realistic 3D varia-
tions in scene layout from unposed 2D images.
Intuitively, our method is supervised to capture the inter-
nal statistics of image patches at various scales and generate
3D scenes whose patch-based projections follow the input
image statistics. At the core of our method lies continuous-
scale patch-based adversarial [14] training. Our radiance
fields are represented as triplane feature maps [4, 43] pro-
duced by a StyleGAN2 [22] generator. We volume-render
our generated scenes from randomly sampled cameras with
varying fields of view, to simulate the appearance of im-
age patches at various scales. A scale-aware discrimina-
tor is then used to compute an adversarial loss on the real
and generated 2D patches to enforce realistic patch dis-
tributions across all sampled views. Notably, our design
of continuous-scale patch-based generator and discrimina-
tor allows patch-level adversarial training without expen-
sive hierarchical training [41, 51, 54]. During the training,
we find applying perspective augmentations to the image
patches and optimizing the camera sampling distribution to
be important for high-quality scene generation.
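As a loose sketch of the continuous-scale idea (our simplification; SinGRAF's actual camera model and sampling distributions differ), a patch scale is drawn from a continuous range, mapped to a field of view for the rendered patch, and passed along so a scale-aware discriminator can judge patches of any scale:

import numpy as np

def sample_patch_camera(rng, fov_min_deg=20.0, fov_max_deg=70.0):
    """Draw a continuous patch scale and map it to a camera field of view."""
    scale = rng.uniform(0.0, 1.0)              # 0 = zoomed-in detail, 1 = wide view
    fov = fov_min_deg + scale * (fov_max_deg - fov_min_deg)
    yaw = rng.uniform(0.0, 360.0)              # random viewing direction
    return {"fov_deg": fov, "yaw_deg": yaw, "scale": scale}

rng = np.random.default_rng(0)
for cam in (sample_patch_camera(rng) for _ in range(4)):
    # The scale value would accompany the rendered patch into the discriminator.
    print(f"fov={cam['fov_deg']:.1f} deg, yaw={cam['yaw_deg']:.1f} deg, scale={cam['scale']:.2f}")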
The resulting system is able to create plausible 3D vari-
ations of a given scene trained only from a set of unposed
2D images of that scene. We demonstrate our method on
two challenging indoor datasets of Replica [47] and Matter-
port3D [6] as well as a captured outdoor scene. We evaluate
SinGRAF against the state-of-the-art 3D scene generation
methods, demonstrating its unique ability to induce realis-
tic and diverse 3D generations.
|
Takashima_Visual_Atoms_Pre-Training_Vision_Transformers_With_Sinusoidal_Waves_CVPR_2023 | Abstract
Formula-driven supervised learning (FDSL) has been
shown to be an effective method for pre-training vision
transformers, where ExFractalDB-21k was shown to exceed
the pre-training effect of ImageNet-21k. These studies also
indicate that contours mattered more than textures when
pre-training vision transformers. However, the lack of a
systematic investigation as to why these contour-oriented
synthetic datasets can achieve the same accuracy as real
datasets leaves much room for skepticism. In the present
work, we develop a novel methodology based on circular
harmonics for systematically investigating the design space
of contour-oriented synthetic datasets. This allows us to
efficiently search the optimal range of FDSL parameters
and maximize the variety of synthetic images in the dataset,
which we found to be a critical factor. When the resulting
new dataset VisualAtom-21k is used for pre-training ViT-
Base, the top-1 accuracy reached 83.7% when fine-tuning
on ImageNet-1k. This is only 0.5% difference from the top-1
accuracy (84.2%) achieved by the JFT-300M pre-training,
even though the scale of images is 1/14. Unlike JFT-300M
which is a static dataset, the quality of synthetic datasets
will continue to improve, and the current work is a testa-
ment to this possibility. FDSL is also free of the common
issues associated with real images, e.g. privacy/copyright
issues, labeling costs/errors, and ethical biases.
| 1. Introduction
Vision transformers [ 10] have made a significant im-
pact on the entire field of computer vision, and state of
the art models in classification [ 37,41,42], object detec-
tion [ 25,36], and segmentation [ 8,15,23,36] are now based
on vision transformers. The accuracy of vision transform-
ers exceeds that of convolutional neural networks by a con-
siderable margin when the model is pre-trained on huge
datasets, such as JFT-300M [32]. However, the JFT-300M
dataset contains 300M images and 375M labels. It is impos-
[Figure 1: (a) the visual atomic renderer — sinusoidal waves superposed onto orbits, analogous to de Broglie's atomic model; (b) comparison of VisualAtom (21M images, formula supervision, 83.7% on ImageNet-1k) and JFT-300M (300M images, human supervision, 84.2% on ImageNet-1k).]
Figure 1. VisualAtom: a new FDSL dataset. (a) Inspired by de
Broglie’s atomic model, we propose VisualAtom dataset contain-
ing shapes from two sinusoidal waves. (b) When fine-tuning on
ImageNet-1k, ViT-B pre-trained with VisualAtom achieved the ac-
curacy of only 0.5% lower than JFT-300M using 1/14 images.
sible to manually label all of these images. Efforts to auto-
matically label such datasets are still not as accurate as man-
ual labeling. Self-supervised learning (SSL) is increasing
in popularity, as datasets do not need to be labeled for this
mode of training [16]. Although SSL removes the burden of
labeling large datasets, the effort to collect/download, store,
and load these large datasets remains a challenge.
One of the major issues in computer vision is that ac-
cess to huge datasets, such as JFT-300M/3B [ 32,42] and
IG-3.5B [ 27], is limited to certain institutions. This makes it
difficult for the rest of the community to build upon, or even
reproduce, existing works. This limitation has prompted the creation of open datasets, such as LAION-5B [30]. How-
ever, the LAION dataset is not curated, which means that it
could contain inappropriate content, or be subject to soci-
etal bias and/or privacy/copyright issues [ 5,39,40]. How to
curate such large datasets from the perspective of AI ethics
and safety is an open area of research, but, in the meantime,
an alternative approach to creating large datasets for com-
puter vision is needed.
Formula-Driven Supervised Learning (FDSL) [ 19,20]
has been proposed as an alternative to supervised learn-
ing (SL) and SSL with real images. The term “formula-
driven” encompasses a variety of techniques for generating
synthetic images from mathematical formulae. The ratio-
nale here is that, during the pre-training of vision trans-
formers, feeding such synthetic patterns are sufficient to ac-
quire the necessary visual representations. These images
include various types of fractals [ 1,17,19,20,28], geomet-
ric patterns [ 18], polygons and other basic shapes [ 17]. The
complexity and smoothness of these shapes can be adjusted
along with the brightness, texture, fill-rate, and other fac-
tors that affect the rendered image. Labels can be assigned
based on the combination of any of these factors, so a la-
beled dataset of arbitrary quantity can be generated without
human intervention. Furthermore, there is close-to-zero risk
of generating images with ethical implications, such as so-
cietal bias or copyright infringement.
Another major advantage of synthetic datasets is that
the quality of images can be improved continuously, unlike
natural datasets which can only be enhanced in quantity.
Therefore, we anticipate that we could eventually create a
synthetic image dataset that surpass the pre-training effect
of JFT-300M, by understanding which properties of syn-
thetic images that contribute to pre-training and improving
them. Nakashima et al. [28] used fractal images to pre-train
vision transformers and found that the attention maps tend
to focus on the contours (outlines) rather than the textures.
Kataoka et al. [17] verified the importance of contours by
creating a new dataset from well-designed polygons, and
exceeded the pre-training effect of ImageNet-21k with this
dataset that consists of only contours. These studies indi-
cate that contours are what matter when pre-training vision
transformers. However, these studies covered only a lim-
ited design space, due to the difficulty of precisely control-
ling the geometric properties of the contours in each image.
The present study aims to conduct a systematic and thor-ough investigation of the design space of contour-oriented
synthetic images. We systematically investigate the design
space of contours by expressing them as a superposition of
sinusoidal waves onto ellipses, as shown in Figure 1a. In the
same way that a Fourier series can express arbitrary func-
tions, we can express any contour shape with such a super-
position of waves onto an ellipse. Such geometrical con-
cepts have appeared in classical physics, e.g.de Broglie’s
atomic model [ 6]. Therefore, we name this new dataset
“VisualAtom”, and the method to generate the images “vi-
sual atomic renderer”. The visual atomic renderer allows us
to exhaustively cover the design space of contour-oriented
synthetic images, by systematically varying the frequency,
amplitude, and phase of each orbit, along with the number
of orbits and the degree of quantization of the orbits. We vary the range of these parameters to generate datasets with different varieties of images. We found that the variety of contour shapes is a crucial factor for achieving a superior pre-training ef-
fect. Our resulting dataset was able to nearly match the pre-
training effect of JFT-300M when fine-tuned on ImageNet-
1k, while using only 21M images. We summarize the con-
tributions as follows:
Investigative contribution (Figure 1a):We propose a
novel methodology based on circular harmonics that allows
us to systematically investigate the design space of contour-
oriented synthetic datasets. Identifying the optimal range of
frequency, amplitude, and quantization of the contours lead
to the creation of a novel synthetic dataset VisualAtom with
unprecedented pre-training effect on vision transformers.
Experimental contribution (Figure 1b):We show that
pre-training ViT-B with VisualAtom can achieve compara-
ble accuracy to pre-training on JFT-300M, when evaluated
on ImageNet-1k fine-tuning. Notably, the number of im-
ages used to achieve this level of accuracy was approxi-
mately 1/14 of JFT-300M. We also show that VisualAtom
outperforms existing state-of-the-art FDSL methods.
Ethical contribution: We will release the synthesized im-
age dataset, pre-trained models, and the code to generate
the dataset. This will also allow users with limited inter-
net bandwidth to generate the dataset locally. Moreover,
the dataset and model will be released publicly as a com-
mercially available license and not limited to educational or
academic usage.
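To make the contour parameterization described above concrete, the following is a minimal sketch, under our own assumptions, of how a single contour can be generated as sinusoidal waves superposed onto an ellipse; the function name and default parameter values are hypothetical and not taken from the official VisualAtom generator.

```python
# Illustrative sketch (not the official generator): a contour drawn as
# sinusoidal waves superposed onto an ellipse.
import numpy as np

def visual_atom_contour(a=1.0, b=0.7, freqs=(4, 12), amps=(0.05, 0.03),
                        phases=(0.0, 0.5), num_points=1000):
    """Return (x, y) samples of an elliptical orbit perturbed by sinusoids."""
    theta = np.linspace(0, 2 * np.pi, num_points)
    r = 1.0 + sum(A * np.sin(k * theta + p)
                  for k, A, p in zip(freqs, amps, phases))
    x = a * r * np.cos(theta)
    y = b * r * np.sin(theta)
    return x, y

x, y = visual_atom_contour()
```

Varying the frequencies, amplitudes, phases, and ellipse axes over chosen ranges is what produces the variety of contour shapes discussed above.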
|
Suo_S3C_Semi-Supervised_VQA_Natural_Language_Explanation_via_Self-Critical_Learning_CVPR_2023 | Abstract
VQA Natural Language Explanation (VQA-NLE) task
aims to explain the decision-making process of VQA mod-
els in natural language. Unlike traditional attention or gra-
dient analysis, free-text rationales can be easier to under-
stand and gain users’ trust. Existing methods mostly use
post-hoc or self-rationalization models to obtain a plau-
sible explanation. However, these frameworks are bottle-
necked by the following challenges: 1) the reasoning pro-
cess cannot be faithfully responded to and suffer from the
problem of logical inconsistency. 2) Human-annotated ex-
planations are expensive and time-consuming to collect. In
this paper, we propose a new Semi-Supervised VQA-NLE
via Self-Critical Learning ( S3C), which evaluates the can-
didate explanations by answering rewards to improve the
logical consistency between answers and rationales. With
a semi-supervised learning framework, the S3Ccan ben-
efit from a tremendous amount of samples without human-
annotated explanations. A large number of automatic mea-
sures and human evaluations all show the effectiveness of
our method. Meanwhile, the framework achieves a new
state-of-the-art performance on the two VQA-NLE datasets.
| 1. Introduction
Deep neural networks have enabled significant break-
throughs in a variety of vision-language (VL) tasks such
as image captioning [10, 47] and visual question answer-
ing (VQA) [2, 39]. Unfortunately, most of them are black
box systems, which makes it challenging to gain users’
trust [20]. Explaining the decision-making process of
deep VL models is a long-standing and essential problem.
*These authors contributed equally to this work.
†Corresponding authors.
[Figure 1 diagram: panels (a) post-hoc explanation method, (b) self-rationalization method, (c) our method with answer-score rewards on labelled and unlabelled samples.]
Figure 1. Paradigm comparison of different VQA-NLE meth-
ods. (a) Post-hoc explanation method adopts two independent
models to predict answers and explanations respectively. (b) Self-
rationalization method uses a united VL model to simultaneously
generate answers and explanations. (c) Our self-critical strategy
utilizes answer scores as rewards and obtains more reliable ratio-
nales with semi-supervised learning.
Some approaches depend on attention mechanisms [2, 30]
or gradient-based localization [50] to acquire visual expla-
nations, which can highlight some contributing image re-
gions for the predicted answers. However, simple visualiza-
tion cannot explain how these areas support the answers and
they are also hard to comprehend [20,48]. Conversely, Nat-
ural Language Explanation (NLE) task [6, 38] can explain
the decision-making process of a model by generating a nat-
ural language sentence. The language-based explanations
are more accessible for users to understand, and they can
also help researchers optimize the structure of models [34].
Recently, some models of NLE in the VL commu-
nity have achieved promising results, especially for VQA-
NLE [20,34,41,48,58]. They can guide models to generate
natural language sentences and interpret how the models get
answers. Specifically, the first research line usually treats
VQA-NLE as a predict-then-explain task [20, 34, 41, 58],
namely post-hoc explanations method. As shown in Fig. 1
(a), these methods first depend on pre-trained VL models
(such as UNITER [8] or Oscar [25]) to gain answers. Then
the fused multi-modal features and the predicted answers
are fed into a separated language model ( e.g., LSTM [16]
or Transformer [54]) to generate corresponding explana-
tions. As shown in Fig. 1 (b), the other line [48] relies on a
united VL model while generating both answers and expla-
nations, which is known as the self-rationalization method.
This framework can simultaneously predict an answer and
generate a rationale by formulating the answer as a text-
generation task along with the explanation.
Though significant progress has been made, the two
paradigms are still restricted by the following challenges:
1) For the first paradigm, since the decision-making model
and interpretation part are two separate modules, it would
inevitably lead to unfaithful responses to the reasoning pro-
cess of the decision models. 2) Due to the lack of explic-
itly logical relationship modeling, previous work [19] has
proved that the straightforward self-rationalization frame-
works suffer from the problem of logical inconsistency.
3) The above strategies all require an amount of human-
annotated explanations, which are expensive and time-
consuming to collect [62].
To solve the above challenges, inspired by [5, 51], we
argue that a reasonable rationale can assist the model in ob-
taining a correct answer, and vice versa, the answer can be
converted into an evaluation criterion for possible explana-
tions. In this paper, we propose a new Semi-Supervised
VQA-NLE method with Self-Critical learning, which is
called S3Cfor short. As shown in Fig.1 (c), given im-
ages and related questions, we first leverage a prompting
mechanism to construct answer and explanation templates,
which can guide the pre-trained VL model to generate an-
swers and multiple candidate explanations based on se-
quence sampling [2]. Then we design a new self-critical
method that converts the answer scores as rewards and en-
courages the model to generate the explanations which con-
tribute to improving the answer scores. In particular, to
reduce the dependency on expensive human annotations,
we further extend our method to the semi-supervised ver-
sion, which utilizes the unlabelled samples1(i.e., conven-
tional VQA data [4, 36]) to significantly enhance the self-
interpretability of the model. With the self-critical strategy
and the semi-supervised learning, our method effectively
models the logical relationships and promotes the logical
1In this paper, we use “unlabelled samples” and “labelled samples” to
indicate the question-answer (QA) pairs without/with human explanations.consistency between answer-explanation pairs. According
to automatic measures and human evaluations, the S3C
outperforms the state-of-the-art models for the VQA-NLE
task on the widely used two datasets and provides a new
paradigm for our community. In summary, we make the
following contributions:
1) We propose a new self-critical VQA-NLE method
that can model the logical relationships between answer-
explanation pairs and evaluate the generated rationales by
answering rewards. This strategy effectively improves the
logical consistency and the reliability of the interpretations.
2) We develop an advanced semi-supervised learning
framework for VQA-NLE, which utilizes amounts of sam-
ples without human-annotated explanations to boost the
self-interpretability of the model further. To the best of
our knowledge, we are the first to explore semi-supervised
learning on the VQA Natural Language Explanation.
3) The proposed S3Cachieves new state-of-the-art per-
formance on VQA-X [13] and A-OKVQA [49] benchmark
datasets. Meanwhile, automatic measures and human eval-
uations all show the effectiveness of our method.
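As a hedged sketch of the self-critical idea described above (our own simplification, not the authors' training code), candidate explanations sampled from the model can be rewarded by the answer score they induce, with a REINFORCE-style loss pushing the model toward explanations that raise that score; the function name and arguments here are hypothetical.

```python
# Simplified self-critical objective: explanations whose induced answer score
# beats the mean (baseline) receive positive reward.
import torch

def self_critical_loss(log_probs, answer_scores):
    """log_probs: (K,) summed log-probabilities of K sampled explanations.
    answer_scores: (K,) score of the correct answer conditioned on each
    explanation. Subtracting the mean score acts as a variance-reducing baseline."""
    rewards = answer_scores - answer_scores.mean()
    return -(rewards.detach() * log_probs).mean()

# usage with dummy values
lp = torch.randn(4, requires_grad=True)
scores = torch.tensor([0.8, 0.6, 0.9, 0.5])
loss = self_critical_loss(lp, scores)
loss.backward()
```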
|
Song_DIFu_Depth-Guided_Implicit_Function_for_Clothed_Human_Reconstruction_CVPR_2023 | Abstract
Recently, implicit function (IF)-based methods for
clothed human reconstruction using a single image have
received a lot of attention. Most existing methods rely on
a 3D embedding branch using volume such as the skinned
multi-person linear (SMPL) model, to compensate for the
lack of information in a single image. Beyond the SMPL,
which provides skinned parametric human 3D information,
in this paper, we propose a new IF-based method, DIFu,
that utilizes a projected depth prior containing textured
and non-parametric human 3D information. In particu-
lar, DIFu consists of a generator, an occupancy prediction
network, and a texture prediction network. The generator
takes an RGB image of the human front-side as input, and
hallucinates the human back-side image. After that, depth
maps for front/back images are estimated and projected into
3D volume space. Finally, the occupancy prediction net-
work extracts a pixel-aligned feature and a voxel-aligned
feature through a 2D encoder and a 3D encoder, respec-
tively, and estimates occupancy using these features. Note
that voxel-aligned features are obtained from the projected
depth maps, thus they can contain detailed 3D information such as hair and clothes. Colors of each query point are also estimated with the texture inference branch. The effectiveness of DIFu is demonstrated by comparing it to re-
cent IF-based models quantitatively and qualitatively.
| 1. Introduction
In order to implement virtual reality and an immersive
metaverse environment, a method of reconstructing a realis-
tic human avatar is an important technology. In particular, if
there are methods that can create a complete 3D model with
only a single view image without specialized devices such
* Corresponding author.
†This work was done while the first author was pursuing his Master’s
degree at Chungnam National University.
Project page is at https://eadcat.github.io/DIFu
Figure 1. (a) Front/back color images. (b) Parametric model vol-
ume. (c) Depth maps. (d) Projected depth volume.
as 3D scanning, it will be highly useful in various fields
such as education, video conference, and entertainment.
Recently, there have been approaches to clothed human re-
construction using a single-view image based on the im-
plicit function (IF) [1,3,9,10,12,13,22,34,35,43,53]. While
IF-based methods have shown promising results thus far,
their performance is limited in unobservable parts. Also, IF-
based methods often produce over-smoothed results, partic-
ularly in intricate areas such as clothing and hair. Without
proper conditions for occluded parts, clothed human recon-
struction is still an open and highly ill-posed problem.
To overcome the aforementioned issue, there are sev-
eral attempts using parametric models [17, 23, 30, 44] to
provide geometric patterns of the human. Leveraging
these benefits, Zheng et al . [53] proposed the parametric
model-conditioned implicit representation (PaMIR). Using
the skinned multi-person linear (SMPL) voxel from pre-
trained GCMR [19], PaMIR extracts 3D geometric features
to overcome depth ambiguity. Also, Xiu et al . [43] pro-
posed a method using the signed distance from the skinned
model to the query points. Their approach helps approxi-
mate the distance from the skinned model to the target sur-
face. While the skinned model can provide global and pose
information to condition occluded parts, it may struggle to
estimate surfaces that are far from the skin, such as long hair
or skirts. As shown in Figure 1-(b), significant discrepan-
cies exist between the detailed surface shape of the skinned
model and the target model. Considering that the parametric
model-based methods are trained by losses on the sampled
query points, the over-smoothing becomes more severe.
Therefore, in this paper, we propose a new IF-based
method using projected depth maps. Specifically, our
method uses a generator to make a back-side color image
and front-/back-side depth maps from a front input image.
Then, we project the depth maps into 3D volume space as
shown in Figure 1-(d). All information, including RGB im-
ages, depth maps and projected depths are passed into the
occupancy prediction network to predict the occupancy of
each query point. The voxel-aligned features extracted from
the 3D encoder in the occupancy prediction network are
derived from projected depth maps rather than the SMPL
model. As a result, they are more effective at conveying 3D
information about the detailed surfaces of the target. To ob-
tain the final 3D mesh, the marching cubes algorithm [24] is
applied to these occupancies. Similar to the occupancy pre-
diction network, we can estimate the colors of each query
point via the texture inference network.
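The following is an illustrative sketch, under our own simplifying assumptions (orthographic projection, normalised depths), of how front/back depth maps could be projected into a voxel volume before being processed by the 3D encoder; it is not the authors' implementation, and the function name and resolution are hypothetical.

```python
# Toy depth-to-volume projection: mark front- and back-surface voxels.
import numpy as np

def project_depth_to_volume(front_depth, back_depth, res=128):
    """front_depth/back_depth: (H, W) arrays in [0, 1] giving the normalised
    distance of the visible surface along the viewing axis."""
    vol = np.zeros((res, res, res), dtype=bool)
    h, w = front_depth.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    vy = (ys * (res - 1) // (h - 1)).astype(int)   # image row -> voxel index
    vx = (xs * (res - 1) // (w - 1)).astype(int)   # image col -> voxel index
    zf = (front_depth * (res - 1)).astype(int)     # front-surface depth slice
    zb = ((1.0 - back_depth) * (res - 1)).astype(int)  # mirrored back surface
    vol[vy, vx, zf] = True
    vol[vy, vx, zb] = True
    return vol

vol = project_depth_to_volume(np.random.rand(64, 64), np.random.rand(64, 64))
```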
|
Tang_ABLE-NeRF_Attention-Based_Rendering_With_Learnable_Embeddings_for_Neural_Radiance_Field_CVPR_2023 | Abstract
Neural Radiance Field (NeRF) is a popular method in
representing 3D scenes by optimising a continuous volumet-
ric scene function. Its large success which lies in applying
volumetric rendering (VR) is also its Achilles’ heel in pro-
ducing view-dependent effects. As a consequence, glossy
and transparent surfaces often appear murky. A remedy
to reduce these artefacts is to constrain this VR equation
by excluding volumes with back-facing normal. While this
approach has some success in rendering glossy surfaces,
translucent objects are still poorly represented. In this pa-
per, we present an alternative to the physics-based VR ap-
proach by introducing a self-attention-based framework on
volumes along a ray. In addition, inspired by modern game
engines which utilise Light Probes to store local lighting
passing through the scene, we incorporate Learnable Em-
beddings to capture view dependent effects within the scene.
Our method, which we call ABLE-NeRF , significantly re-
duces ‘blurry’ glossy surfaces in rendering and produces
realistic translucent surfaces which lack in prior art. In the
Blender dataset, ABLE-NeRF achieves SOTA results and
surpasses Ref-NeRF in all 3 image quality metrics PSNR,
SSIM, LPIPS.
| 1. Introduction
Neural Radiance Field (NeRF) has become the de facto
method for 3D scene representation. By representing the
scene as a continuous function, NeRF is able to generate
photo-realistic novel view images by marching camera rays
through the scene. NeRF first samples a set of 3D points
along a camera ray and outputs its outgoing radiance. The
final pixel colour of a camera ray is then computed us-
ing volumetric rendering (VR) which colours are alpha-
composited. This simple approach allows NeRF to gen-
erate impressive photo-realistic novel views of a complex
3D scene. However, NeRF is unable to produce accurate
colours of objects with view-dependent effects. Colours of
Figure 1. We illustrate two views of the Blender ’Drums’ Scene.
The surface of the drums exhibit either a translucent surface or
a reflective surface at different angles. As shown, Ref-NeRF
model has severe difficulties interpolating between the translu-
cent and reflective surfaces as the viewing angle changes. Our
method demonstrates its superiority over NeRF rendering models
by producing such accurate view-dependent effects. In addition,
the specularity of the cymbals are rendered much closer to ground
truth compared to Ref-NeRF.
translucent objects often appear murky and glossy objects
have blurry specular highlights. Our work aims to reduce
these artefacts.
The exhibited artefacts of the NeRF rendering model is
largely due to the inherent usage of VR as features are ac-
cumulated in the colour space. Variants of NeRF attempt to
tackle this defect by altering the basis of this VR equation.
For instance, Ref-NeRF first predicts the normal vector of
each point on the ray. If a point has a predicted normal fac-
ing backwards from the camera, its colour is excluded from
computation via regularisation. However, prediction of nor-
mals in an object’s interior is ill-posed since these points
are not on actual surfaces. As a consequence, Ref-NeRF
achieves some success over the baseline NeRF model, al-
beit imperfectly.
When rendering translucent objects with additional
specular effects, NeRF and its variants suffer from the same
deficiency. This is due to the computation of σwhich is
analogous to the ‘opacity’ attribute of a point used in the VR
equation. It is also related to the point’s transmissivity and
its contribution of radiance to its ray. As per the Fresnel
effect [5], this property should depend on viewing angles.
Similarly, [19] describes a notion of ‘alphasphere’, an opacity hull of a point that stores an opacity
value viewed at direction ω. Most NeRF methods disregard
the viewing angle in computing σ. In fig. 1, the surface
of the uttermost right drum in the Blender scene exhibits
changing reflective and translucent properties at different
viewing angles. Ref-Nerf and other variants, by discount-
ing the dependency of σon viewing angle, may not render
accurate colours of such objects.
Additionally, learning to model opacity and colour sepa-
rately may be inadequate in predicting the ray’s colour. Ac-
cumulating high-frequency features directly in the colour
space causes the model to be sensitive to both opacity and
sampling intervals of points along the ray. Therefore we re-
work how volumetric rendering can be applied to view syn-
thesis. Inspecting the VR equation reveals that this method-
ology is similar to a self-attention mechanism; a point’s
contribution to its ray colour is dependent on points lying
in front of it. By this principle, we designed ABLE-NeRF
as an attention-based framework. To mimic the VR equa-
tion, mask attention is applied to points, preventing them
from attending to others behind it.
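A minimal sketch of this masking idea is given below (our own illustration, not the paper's network): samples along a ray, ordered near-to-far, attend only to samples in front of them, mirroring the front-to-back accumulation of the volumetric rendering equation.

```python
# Masked self-attention over ray samples: no attention to samples behind.
import torch

def front_only_attention(tokens):
    """tokens: (N, D) features of N ray samples ordered near-to-far.
    Sample i may attend to samples j <= i (itself and everything in front)."""
    n, d = tokens.shape
    scores = tokens @ tokens.t() / d ** 0.5
    mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float('-inf'))  # block "behind" samples
    attn = torch.softmax(scores, dim=-1)
    return attn @ tokens

out = front_only_attention(torch.randn(16, 32))
```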
The second stage of ABLE-NeRF takes inspiration from
modern game engines in relighting objects by invoking
a form of memorisation framework called ‘baking’. In
practice, traditional computer graphics rendering methods
would capture indirect lighting by applying Monte Carlo
path tracing to cache irradiance and then apply interpola-
tion during run-time. Similarly, game engines would use
lightmaps to cache global illumination for lower computa-
tional costs. For relighting dynamic objects, localised light
probes are embedded in the scene to capture light passing
through free space. At run-time, moving objects query from
these light probes for accurate relighting. The commonal-
ity between all these approaches is the process of ‘memo-
rising’ lighting information and interpolating them duringrun time for accurate relighting. As such, we take inspi-
ration from these methods by creating a memorisation net-
work for view synthesis. Given a static scene, we incorpo-
rate Learnable Embeddings (LE), which are learnable mem-
ory tokens, to store scene information in latent space during
training. Specifically, the LE attends to points sampled dur-
ing ray casting via cross-attention to memorise scene infor-
mation. To render accurate view dependent effects a direc-
tional view token, comprising of camera pose, would de-
code from these embeddings.
ABLE-NeRF provides high quality rendering on novel
view synthesis tasks. The memorisation network achieves
significant improvements in producing precise specular ef-
fects over Ref-NeRF. Moreover, by reworking volumetric
rendering as an attention framework, ABLE-NeRF renders
much more accurate colours of translucent objects than
prior art. On the blender dataset, ABLE-NeRF excels both
quantitatively and qualitatively relative to Ref-NeRF.
In summary, our technical contributions are:
(1) An approach demonstrating the capability and superi-
ority of transformers modelling a physics based volumetric
rendering approach.
(2) A memorisation based framework with Learnable
Embeddings (LE) to capture and render detailed view-
dependent effects with a cross-attention network.
|
Tang_Weakly_Supervised_Posture_Mining_for_Fine-Grained_Classification_CVPR_2023 | Abstract
Because of the subtle differences between the different sub-categories of common visual categories, such as bird species, fine-grained classification has been seen as a challenging task for many years. Most previous works focus on the features in a single discriminative region in isolation, while neglecting the connections between the different discriminative regions in the whole image. However, the relationship between different discriminative regions contains rich posture information, and by adding the posture information, the model can learn the behavior of the object, which helps improve the classification performance. In this paper, we propose a novel fine-grained framework named PMRC (posture mining and reverse cross-entropy), which is able to combine with different backbones to good effect. In PMRC, we use the Deep Navigator to generate the discriminative regions from the images, and then use them to construct the graph. We aggregate the graph by message passing and get the classification results. Specifically, in order to force PMRC to learn how to mine the posture information, we design a novel training paradigm, which makes the Deep Navigator and message passing communicate and train together. In addition, we propose the reverse cross-entropy (RCE) and demonstrate that, compared to the cross-entropy (CE), RCE can not only promote the accuracy of our model but also generalize to promote the accuracy of other kinds of fine-grained classification models. Experimental results on benchmark datasets confirm that PMRC can achieve state-of-the-art performance.
| 1. Introduction
Fine-grained classification tasks have been seen as quite
challenging tasks because the visual differences between
the fine-grained classification datasets are hard to recog-
nize. For ordinary people, we can do the normal classifi-
cation easily, but as for the fine-grained classification, only
experts with professional knowledge can do it. Therefore,
†Equal contribution.∗Corresponding author.
Figure 1. The overview of PMRC. Firstly, we use the Deep Navi-
gator to generate the discriminative regions. Then we construct the
graph. Finally, we aggregate the graph through message passing
and classify the graph.
compared to category-level classification, fine-grained clas-
sification is more challenging.
There have been many predecessors on fine-grained clas-
sification. Works in [2,4,7,12,16,25,31,45,48] can achieve
good performance on fine-grained classification. However,
their training and testing phase both need bounding box an-
notations which cost a lot of manual labour and are always
error-prone. Then, works in [3, 20] develop methods that use the annotations only in the training phase. More recent
works develop methods that don’t need bounding box anno-
tations in the training or testing phase [18, 22, 26, 47]. It is a general idea to create a graph using local regions. However, related existing methods [44, 49] are not easy to trans-
plant, and it is difficult to perceive discriminative regions
with correct context information and the relationship be-
tween regions. We propose a method that can be conve-
niently combined with different backbones, and propose a
novel learning strategy to ensure that the model can per-
ceive the correct discriminative regions and their relation-
ships (posture information). In addition, our RCE is simple
to implement and has better performance than CE.
The framework we propose, which we term PMRC (pos-
ture mining and RCE), uses the Deep Navigator and a graph
neural network to mine the posture information from the
fine-grained images and use RCE to promote the perfor-
mance. PMRC is able to combine with different backbones
to good effect. We design the loss to make PMRC learn
the way to mine the posture information from the images,
which includes guiding the Deep Navigator to search for the discriminative regions and guiding the message passing module to perceive the posture information based on the discrimina-
tive regions. Besides, the reason we use RCE instead of CE
is that although CE can be seen as an appropriate loss func-
tion for normal classification, in the training phase, for each sample, it focuses on completing the correct classification as much as possible. In the network learning phase, it only concentrates on improving the score of positive labels output by the softmax layer, while ignoring the information contained
in negative labels. Because of the characteristics of fine-
grained classification, negative labels which contain sub-
tle inter-class difference information are very significant.
Compared with CE, RCE learns the inter-class difference
information by reversing the label score of the softmax output layer, so that it has a better effect on fine-grained classifica-
tion. Specifically, our PMRC has three main steps (see in
Figure 1).
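As a rough, hypothetical sketch of the region-graph aggregation step outlined above (our own simplification, not the paper's network), region features from the Deep Navigator can form a fully connected graph over which one round of mean message passing mixes information before classification; all module and parameter names here are assumptions.

```python
# Toy region-graph aggregation: mean message passing over region features.
import torch
import torch.nn as nn

class RegionGraphAggregator(nn.Module):
    def __init__(self, dim=256, num_classes=200):
        super().__init__()
        self.message = nn.Linear(dim, dim)
        self.update = nn.Linear(2 * dim, dim)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, regions):  # regions: (R, dim) features of R regions
        msgs = self.message(regions).mean(dim=0, keepdim=True)  # aggregate
        msgs = msgs.expand_as(regions)
        updated = torch.relu(self.update(torch.cat([regions, msgs], dim=-1)))
        return self.classifier(updated.mean(dim=0))  # graph-level logits

logits = RegionGraphAggregator()(torch.randn(4, 256))
```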
The main contributions of this paper are as follows: (1)
We propose a simple framework to mine the posture infor-
mation in fine-grained classification images; our framework
is able to combine easily with different backbones to good
effect. (2) We design a novel learning strategy. For the
posture mining part, the loss of the Deep Navigator and
the loss of message passing communicate with each other
to make the model learn how to mine the posture infor-
mation. For the classification part, we use RCE loss func-
tion which can effectively learn the inter-class differences of
the samples. (3) PMRC can be trained end-to-end without
bounding-box/part annotations. We achieve state-of-the-art
on commonly used benchmarks.
|
Solodskikh_Integral_Neural_Networks_CVPR_2023 | Abstract
We introduce a new family of deep neural networks,
where instead of the conventional representation of net-
work layers as N-dimensional weight tensors, we use a
continuous layer representation along the filter and chan-
nel dimensions. We call such networks Integral Neural Net-
works (INNs). In particular, the weights of INNs are rep-
resented as continuous functions defined on N-dimensional
hypercubes, and the discrete transformations of inputs to
the layers are replaced by continuous integration opera-
tions, accordingly. During the inference stage, our con-
tinuous layers can be converted into the traditional tensor
representation via numerical integral quadratures. Such
kind of representation allows the discretization of a net-
work to an arbitrary size with various discretization in-
tervals for the integral kernels. This approach can be ap-
plied to prune the model directly on an edge device while
suffering only a small performance loss at high rates of
structural pruning without any fine-tuning. To evaluate the
practical benefits of our proposed approach, we have con-
ducted experiments using various neural network architec-
tures on multiple tasks. Our reported results show that the
proposed INNs achieve the same performance with their
conventional discrete counterparts, while being able to pre-
serve approximately the same performance (2% accuracy
loss for ResNet18 on Imagenet) at a high rate (up to 30%)
of structural pruning without fine-tuning, compared to 65%
accuracy loss of the conventional pruning methods under
the same conditions. Code is available at gitee.
| 1. Introduction
*The authors contributed equally to this work.
†Currently affiliated with Garch Lab.
Recently, deep neural networks (DNNs) have achieved impressive breakthroughs in a wide range of practical applications in both computer vision [13, 20, 32] and natural language processing [7] tasks. This state-of-the-art perfor-
mance is mainly attributed to the huge representation ca-
pacity [2] of DNNs. According to the Kolmogorov su-
perposition theorem [14] and the universal approximation
theorem [29], a DNN is capable of approximating uni-
formly any continuous multivariate function with appropri-
ate weights. To achieve better performance, a large num-
ber of parameters and computations are assigned to the
DNN [10, 40], which seriously limits its application on
memory- and computation-constrained devices. Hence, nu-
merous approaches have been proposed to compress and ac-
celerate neural networks, including pruning [26, 38], quan-
tization [35, 41] and neural architecture search [34, 37].
DNNs are particularly successful in dealing with chal-
lenging transformations of natural signals such as images
or audio signals [27]. Since such analogue signals are in-
evitably discretized, neural networks conventionally per-
form discrete representations and transformations, such as
matrix multiplications and discrete convolutions. However,
for such kind of representations the size of neural networks
cannot be adjusted without suffering severe performance
degradation during the inference stage, once the training
procedure is completed. Although several network prun-
ing methods [26, 38] have been proposed to extract crucial
channels from the trained model and generate efficient mod-
els, they either suffer from a significant accuracy degrada-
tion or require to fine-tune the model on the whole train-
ing database. Along with the development of hardware,
there have been diverse edge devices with various capacities
for memory and computation, from ordinary processors to
dedicated neural network accelerators. The model size for
different devices varies significantly [4]. Moreover, many
tasks (e.g. autonomous driving) require different response
speeds on the same hardware according to various scenar-
ios or conditions (e.g. driving speed and weather condition).
The conventional way to deal with such problems is to de-
sign multiple model architectures for all possible scenarios
and store them together. However, the downside of such
strategy is that it requires huge resources for training and
memory space for storage. Hence, it is crucial to design
neural networks that feature a self-resizing ability during
inference, while preserving the same level of performance.
Inspired by the inherently continuous nature of the in-
put signals, we challenge the discrete representation of neu-
ral networks by exploring a continuous representation along
the filters and channel dimensions. This leads to a new class
of networks which we refer to as Integral Neural Networks
(INNs). INNs employ the high-dimensional hypercube to
present the weights of one layer as a continuous surface.
Then, we define integral operators analogous to the conven-
tional discrete operators in neural networks. INNs can be
converted into the conventional tensor representation by nu-
merical integration quadratures for the forward pass. At the
inference stage, it is convenient to discretize such networks
into arbitrary size with various discretization intervals of the
integral kernels. Since the representation is composed of in-
tegral operators, discretizing the continuous networks could
be considered as the numerical quadrature approximation
procedure [9]. The estimated values with various discretiza-
tion intervals are close to the integral value when the inter-
val is small enough. Hence, when we discretize an INN with
different intervals to generate networks of various sizes, it
is capable of preserving the original performance to some
extent without the need of additional fine-tuning. Such kind
of representation of neural networks can play a crucial role
in dealing with the important problem of efficient network
deployment in diverse conditions and hardware setups.
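The following toy example, written under our own assumptions rather than as the paper's implementation, illustrates how a layer whose weights are a continuous function on a hypercube can be discretized at arbitrary widths, with trapezoidal quadrature weights turning the integral into an ordinary matrix product.

```python
# Toy continuous-weight layer: sample the integral kernel at any resolution.
import numpy as np

def weight_function(x_out, x_in):
    """A smooth 'integral kernel' W(x_out, x_in) standing in for learned weights."""
    return np.sin(3 * np.pi * x_out)[:, None] * np.cos(2 * np.pi * x_in)[None, :]

def discretize_layer(n_out, n_in):
    x_out = np.linspace(0, 1, n_out)
    x_in = np.linspace(0, 1, n_in)
    W = weight_function(x_out, x_in)
    q = np.full(n_in, 1.0 / (n_in - 1))   # trapezoidal quadrature weights
    q[[0, -1]] *= 0.5
    return W * q[None, :]                 # (n_out, n_in) discrete weight matrix

# the same continuous layer discretized at two different input widths
W_small = discretize_layer(16, 32)
W_large = discretize_layer(16, 128)
```

Sampling the same continuous kernel with a coarser grid is what corresponds to structural pruning at inference time.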
To evaluate the performance of INNs, extensive exper-
iments were conducted on image classification and super-
resolution tasks. The results show that the proposed contin-
uous INNs achieve the same performance with their discrete
DNN counterparts, when the training procedure is finished.
Moreover, such kind of networks approximately preserve
the performance at a high rate of structural pruning without
the aid of additional fine-tuning.
|
Sun_Rethinking_Domain_Generalization_for_Face_Anti-Spoofing_Separability_and_Alignment_CVPR_2023 | Abstract
This work studies the generalization issue of face anti-
spoofing (FAS) models on domain gaps, such as image res-
olution, blurriness and sensor variations. Most prior works
regard domain-specific signals as a negative impact, and
apply metric learning or adversarial losses to remove them
from feature representation. Though learning a domain-
invariant feature space is viable for the training data, we
show that the feature shift still exists in an unseen test do-
main, which backfires on the generalizability of the clas-
sifier. In this work, instead of constructing a domain-
invariant feature space, we encourage domain separabil-
ity while aligning the live-to-spoof transition (i.e., the tra-
jectory from live to spoof) to be the same for all domains.
We formulate this FAS strategy of separability and align-
ment (SA-FAS) as a problem of invariant risk minimization
(IRM), and learn domain-variant feature representation but
domain-invariant classifier. We demonstrate the effective-
ness of SA-FAS on challenging cross-domain FAS datasets
and establish state-of-the-art performance. Code is avail-
able at https://github.com/sunyiyou/SAFAS .
| 1. Introduction
Face recognition (FR) [ 16] has achieved remarkable suc-
cess and has been widely employed in mobile access control
and electronic payments. Despite the promise, FR systems
still suffer from presentation attacks (PAs), including print
attacks, digital replay, and 3D masks. As a result, face anti-
spoofing (FAS) has been an important topic for almost two
decades [ 3,35,45,47,66,74,76].
In early systems like building access and border con-
trol with limited variations ( e.g., lighting and poses), sim-
ple methods [ 6,17,41] have exhibited promise. These al-
gorithms are designed for the closed-world setting, where
*This work was done during Yiyou Sun’s internship at Google.
1In statistics, spurious correlation is a mathematical relationship in
which multiple events or variables are associated but not causally related.
[Figure 1 panel titles: (a) Common Solution With Mixed Domains; (b) SA-FAS (Ours).]
Figure 1. Cross-domain FAS: (a) Common FAS solutions aim to
remove domain-specific signals and mix domains in one cluster.
However, we empirically show domain-specific signals still exists
in the feature space, and model might pick domain-specific signals
as spurious correlation1for classification. (b) Our SA-FAS aims
to retain domain signal. Specifically, we train a feature space with
two critical properties: (1) Separability : Samples from differ-
ent domains and live/spoof classes are well-separated; (2) Align-
ment : Live-to-spoof transitions are aligned in the same direction
for all domains. With these two properties, our method keeps the
domain-specific signals invariant to the decision boundary.
the camera and environment are assumed to be the same be-
tween train and test. This assumption, however, rarely holds
for in-the-wild applications, e.g., mobile face unlock and
sensor-invariant ID verification. Face images in those FAS
cases may be acquired from wider angles, complex scenes,
and different devices, where it is hard for training data to
cover all the variations. These differences between training
and test data are termed domain gaps and the FAS solutions
to tackle the domain gaps are termed cross-domain FAS.
Learning domain-invariant representation is the main ap-
proach in generic domain generalization [ 70], and has soon
been widely applied to cross-domain FAS [ 30,43,44,60,67,
72]. Those methods consider domain-specific signals as a
confounding factor for model generalization, and hence aim
to remove domain discrepancy from the feature representa-
tion partially or entirely. Adversarial training is commonly
applied so that upon convergence the domain discriminator
cannot distinguish which domain the features come from.
In addition, some methods apply metric learning to further
regularize the feature space, e.g., triplet loss [ 67], dual-force
triplet loss [ 60], and single-side triplet loss [ 30].
There are two crucial issues that limit the generaliza-
tion ability of these methods [ 30,43,44,60,67,72] with
domain-invariant feature losses. First, these methods posit
a strong assumption that the feature space is perfectly
domain-invariant after removing the domain-specific sig-
nals from training data. However, this assumption is un-
realistic due to the limited size and domain variants of the
training data, on which the loss might easily overfit during
training. As shown in Fig. 7, the test distribution is more
expanded compared to the training one, and the spatial re-
lation between live and spoof has largely deviated from the
learned classifier. Second, feature space becomes ambigu-
ous when domains are mixed together. Note that the domain
can carry information on certain image resolutions, blur-
riness and sensor patterns. If features from different do-
mains are collapsed together [ 54], the live/spoof classifier
will undesirably leverage spurious correlations to make the
live/spoof predictions as shown in Fig. 1(a),e.g., compar-
ing live from low-resolution domains to spoof from high-
resolution ones. Such a classifier will unlikely generalize to
a test domain when the correlation does not exist.
In this work, we rethink feature learning for cross-
domain FAS. Instead of constructing a domain-invariant
feature space, we aim to find a generalized classifier while
explicitly maintaining domain-specific signals in the repre-
sentation. Our strategy can be summarized by the following
two properties:
•Separability: We encourage features from different
domains and live/spoof classes to be separated which
facilitates maintaining the domain signal. According
to [4], representations with well-disentangled domain
variation and task-relevant features are more general
and transferable to different domains.
•Alignment: Inspired by [ 31], we regard spoofing as
the process of transition. For similar PA types2, the
transition process would be similar, regardless of envi-
ronments and sensor variations. With this assumption, we regularize the live-to-spoof transition to be aligned in the same direction for all domains.
2This work focuses on print and replay attacks.
We refer to this new learning framework as FAS with sepa-
rability and alignment (dubbed SA-FAS ), shown in Fig. 1
(b). To tackle the separability, we leverage Supervised Con-
trastive Learning (SupCon) [ 33] to learn representations
that force samples from the same domain and the same
live/spoof labels to form a compact cluster. To achieve the
alignment, we devise a novel Projected Gradient optimiza-
tion strategy based on Invariant Risk Minimization (PG-
IRM) to regularize the live-to-spoof transition to be invariant to
the domain variance. With normalization, the feature space
is naturally divided into two symmetric half-spaces: one for
live and one for spoof (see Fig. 6). Domain variations will
manifest inside the half-spaces but have minimal impact on
the live/spoof classifier.
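As a hedged sketch of the alignment property (our own reading, not the exact PG-IRM objective), the live-to-spoof direction can be estimated per domain as the vector from the live-feature mean to the spoof-feature mean and pulled toward a shared direction; the function and variable names are hypothetical.

```python
# Toy alignment loss: per-domain live-to-spoof directions vs. a shared direction.
import torch
import torch.nn.functional as F

def alignment_loss(features, labels, domains, shared_dir):
    """features: (N, D); labels: (N,) with 0=live, 1=spoof; domains: (N,) ids;
    shared_dir: (D,) a direction all domains are pulled toward."""
    target = F.normalize(shared_dir, dim=0)
    losses = []
    for d in domains.unique():
        m = domains == d
        live, spoof = features[m & (labels == 0)], features[m & (labels == 1)]
        if len(live) == 0 or len(spoof) == 0:   # skip domains missing a class
            continue
        direction = F.normalize(spoof.mean(0) - live.mean(0), dim=0)
        losses.append(1.0 - torch.dot(direction, target))
    return torch.stack(losses).mean()

feats, lbls = torch.randn(32, 128), torch.randint(0, 2, (32,))
doms = torch.randint(0, 3, (32,))
loss = alignment_loss(feats, lbls, doms, torch.randn(128))
```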
We summarize our contributions as three-fold:
•We offer a new perspective for cross-domain FAS. In-
stead of removing the domain signal, we propose to
maintain it and design the feature space based on sep-
arability and alignment;
•We first systematically exploit the domain-variant rep-
resentation learning by combining contrastive learn-
ing and effectively optimizing invariant risk minimiza-
tion (IRM) through the projected gradient algorithm
for cross-domain FAS;
•We achieve state-of-the-art performance on widely-
used cross-domain FAS benchmark, and provide in-
depth analysis and insights on how separability and
alignment lead to the performance boost.
|
Sun_Event-Based_Frame_Interpolation_With_Ad-Hoc_Deblurring_CVPR_2023 | Abstract
The performance of video frame interpolation is inher-
ently correlated with the ability to handle motion in the in-
put scene. Even though previous works recognize the utility
of asynchronous event information for this task, they ignore
the fact that motion may or may not result in blur in the
input video to be interpolated, depending on the length of
the exposure time of the frames and the speed of the motion,
and assume either that the input video is sharp, restrict-
ing themselves to frame interpolation, or that it is blurry,
including an explicit, separate deblurring stage before in-
terpolation in their pipeline. We instead propose a general
method for event-based frame interpolation that performs
deblurring ad-hoc and thus works both on sharp and blurry
input videos. Our model consists in a bidirectional recur-
rent network that naturally incorporates the temporal di-
mension of interpolation and fuses information from the in-
put frames and the events adaptively based on their tempo-
ral proximity. In addition, we introduce a novel real-world
high-resolution dataset with events and color videos named
HighREV , which provides a challenging evaluation setting
for the examined task. Extensive experiments on the stan-
dard GoPro benchmark and on our dataset show that our
network consistently outperforms previous state-of-the-art
methods on frame interpolation, single image deblurring
and the joint task of interpolation and deblurring. Our code
and dataset are available at https://github.com/
AHupuJR/REFID .
| 1. Introduction
Video frame interpolation (VFI) methods synthesize in-
termediate frames between consecutive input frames, in-
creasing the frame rate of the input video, with wide ap-
plications in super-slow generation [11, 13, 20], video edit-
ing [27,45], virtual reality [1], and video compression [40].
With the absence of inter-frame information, frame-based
methods explicitly or implicitly utilize motion models such
as linear motion [13] or quadratic motion [41]. However,
[Figure 1 panel labels: Sharp Frame Interpolation; Blurry Frame Interpolation.]
Figure 1. Our unified framework for event-based sharp and blurry
frame interpolation. Red/blue dots: negative/positive events;
Curly braces: exposure time range.
the non-linearity of motion in real-world videos makes it
hard to accurately capture inter-frame motion with these
simple models.
Recent works introduce event cameras in VFI as a proxy
to estimate the inter-frame motion between consecutive
frames. Event cameras [7] are bio-inspired asynchronous
sensors that report per-pixel intensity changes, i.e.,events ,
instead of synchronous full intensity images. The events
are recorded at high temporal resolution (in the order of
µs) and high dynamic range (over 140 dB) within and be-
tween frames, providing valid compressed motion informa-
tion. Previous works [9, 36, 37] show the potential of event
cameras in VFI, comparing favorably to frame-only meth-
ods, especially in high-speed non-linear motion scenarios,
by using spatially aligned events and RGB frames. These
event-based VFI methods make the crucial assumption that
the input images are sharp. However, this assumption is
violated in real-world scenes because of the ubiquitous mo-
tion blur. In particular, because of the finite exposure time
of frames in real-world videos, especially of those cap-
tured with event cameras that output both image frames and
an event stream ( i.e., Dynamic and Activate VIsion Sen-
sor (DA VIS) [3])—which have a rather long exposure time
and low frame rate, motion blur is inevitable for high-speed
scenes. In such a scenario, where the reference frames for
VFI are degraded by motion blur, the performance of frame
interpolation also degrades.
As events encode motion information within and be-
tween frames, several studies [4, 18, 22] are carried out on
event-based deblurring in conjunction with VFI. However,
these works approach the problem via cascaded deblurring
and interpolation pipelines and the performance of VFI is
limited by the image deblurring performance.
Thus, the desideratum in event-based VFI is robust per-
formance on both sharp image interpolation and blurry im-
age interpolation. Frame-based methods [12, 17, 17, 21, 30,
46] usually treat these two aspects as separate tasks. Dif-
ferent from frames, events are not subject to motion blur.
No matter whether the frame is sharp or blurry, the corre-
sponding events are the same. Based on this observation,
we propose to unify the two aforementioned tasks into one
problem: given two input images and a corresponding event
stream, restore the latent sharp images at arbitrary times
between the input images. The input images could be ei-
ther blurry or sharp, as Fig. 1 shows. To solve this prob-
lem, we first revisit the physical model of event-based de-
blurring and frame interpolation. Based on this model, we
propose a novel recurrent network, which can perform both
event-based sharp VFI and event-based blurry VFI. The net-
work consists of two branches, an image branch and an
event branch. The recurrent structure pertains to the event
branch, in order to enable the propagation of information
from events across time in both directions. Features from
the image branch are fused into the recurrent event branch
at multiple levels using a novel attention-based module for
event-image fusion, which is based on the squeeze-and-
excitation operation [10].
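For intuition, the following is a simplified sketch of the kind of physical model referred to above, in the spirit of event-based double-integral formulations rather than the paper's exact equations: a latent log-intensity differs from a reference frame by the polarity-weighted sum of events in between, and the blurry frame is the average of the latent frames over the exposure. The contrast threshold c and all names are our own assumptions.

```python
# Toy event-based frame-formation model.
import numpy as np

def latent_from_reference(ref_log, event_sum, c=0.2):
    """ref_log: (H, W) log intensity at t0; event_sum: (H, W) signed event count
    between t0 and t; c: per-event contrast threshold."""
    return ref_log + c * event_sum

def blurry_from_latents(latent_logs):
    """Average the (T, H, W) latent intensities over the exposure time."""
    return np.exp(latent_logs).mean(axis=0)

ref = np.zeros((8, 8))
events = np.random.randint(-3, 4, size=(5, 8, 8))          # signed event counts
latents = np.stack([latent_from_reference(ref, e)
                    for e in np.cumsum(events, axis=0)])
blurry = blurry_from_latents(latents)
```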
To test our method on a real-world setting and moti-
vated by the lack of event-based datasets recorded with
high-quality event cameras, we record a dataset, HighREV ,
with high-resolution chromatic image sequences and cor-
responding events. From the sharp image sequences, we
synthesize blurry images by averaging several consecutive
frames [19]. To our knowledge, HighREV has the highest
event resolution among all publicly available event datasets.
In summary, we make the following contributions:
• We propose a framework for solving general event-
based frame interpolation and event-based single im-
age deblurring, which builds on the underlying phys-
ical model of high-frame-rate video frame formation
and event generation.
• We introduce a novel network for solving the above
tasks, which is based on a bi-directional recurrent ar-
chitecture, includes an event-guided channel-level at-
tention fusion module that adaptively attends to fea-
tures from the two input frames according to the tem-
poral proximity with features from the event branch,
and achieves state-of-the-art results on both synthetic
and real-world datasets.
• We present a new real-world high-resolution dataset
with events and RGB videos, which enables real-world
evaluation of event-based interpolation and deblurring. |
Uzkent_Dynamic_Inference_With_Grounding_Based_Vision_and_Language_Models_CVPR_2023 | Abstract
Transformers have been recently utilized for vision and
language tasks successfully. For example, recent image and
language models with more than 200M parameters have
been proposed to learn visual grounding in the pre-training
step and show impressive results on downstream vision and
language tasks. On the other hand, there exists a large
amount of computational redundancy in these large models
which skips their run-time efficiency. To address this prob-
lem, we propose dynamic inference for grounding based vi-
sion and language models conditioned on the input image-
text pair. We first design an approach to dynamically skip
multihead self-attention and feed forward network layers
across two backbones and multimodal network. Addition-
ally, we propose dynamic token pruning and fusion for two
backbones. In particular, we remove redundant tokens at
different levels of the backbones and fuse the image tokens
with the language tokens in an adaptive manner. To learn
policies for dynamic inference, we train agents using rein-
forcement learning. In this direction, we replace the CNN
backbone in a recent grounding-based vision and language
model, MDETR, with a vision transformer and call it ViT-
MDETR. Then, we apply our dynamic inference method
to ViTMDETR, called D-ViTMDETR, and perform experi-
ments on image-language tasks. Our results show that we
can improve the run-time efficiency of the state-of-the-art
models MDETR and GLIP by up to ∼50% on Referring Ex-
pression Comprehension and Segmentation, and VQA with
only a maximum ∼0.3% accuracy drop.
| 1. Introduction
Significant progress has been made in the development
of image and language models, attributable to (1) the emergence
of transformers for different modalities [6,13], and (2) large
scale pre-training paradigms [4, 14, 17, 24, 29, 43, 44]. In
particular, with the very large scale pre-training of image
and language models, a large number of parameters and computations are allocated for processing the input image-text pair.
Specifically, the number of parameters of recent vision and
language models can be more than 200M [14, 17, 44], resulting in low run-time efficiency.

Figure 1. Accuracy vs. frames per second comparison of the large (Top Left) and small (Top Right) models, and accuracy vs. GFLOPs comparison of the large (Bottom Left) and small (Bottom Right) models (y-axis: RefCOCO accuracy). D-ViTMDETR outperforms MDETR, GLIP, and our ViTMDETR model in both frames per second and GFLOPs metrics while maintaining high accuracy.

This problem with the
single-modality transformers has been tackled by several
studies before [27, 30, 37]. Such computational complex-
ity is further amplified in multimodal networks, which often build on multiple transformer models. As a result, reduc-
ing the run-time complexity of the multimodal networks
can be very beneficial for the downstream tasks. Exist-
ing methods including pruning [20], knowledge distilla-
tion [10, 33] and quantization [39] can potentially be ex-
tended toward this goal. However, they show significant
performance drops (≥1%) at ≥50% compression rates, and these methods are mostly designed for parameter reduction rather than run-time speed. As a result, we propose
dynamic inference with the large image and language mod-
els mainly to achieve two goals: (1) drastically reduce run-
time complexity, and (2) maintain their high accuracy. To-
wards these goals, in the first step, we analyze the clas-
sic architectural choices for the recent image and language
models. A typical image and language model consists of
a vision encoder (a CNN or a vision transformer), a text
encoder (transformer), and a multimodal transformer (fu-
sion network). Inspired by MDETR [14], we build a vi-
sion and language model consisting of vision and language transformers and a DETR-like multimodal network, and call it ViTMDETR. The transformer modules consist of multi-head self-attention (MSA) and feed-forward network (FFN) blocks, which are experimentally found to be the computationally expensive modules at inference time. It is also known that the computational complexity of the transformer MSA module grows quadratically with the number of tokens. The number of tokens, and hence the computational complexity, is further amplified by the inclusion of multimodal inputs and related modules.
For these reasons, with our D-ViTMDETR model we
propose to dynamically prune input tokens from multiple
modalities across the transformer backbones and fuse vi-
sion tokens adaptively with the text tokens to improve the
accuracy. This way, we can reduce the complexity quadrat-
ically. Additionally, we adaptively skip the computationally
expensive MSA and FFN layers across the two backbones
and the multimodal network to further improve run-time ef-
ficiency. To learn dynamic policies, we train decision net-
works using the policy-gradients based reinforcement learn-
ing algorithm and distill the knowledge from ViTMDETR
to better optimize D-ViTMDETR.
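As a rough sketch of how such skipping policies can be learned, the snippet below samples keep/skip decisions for the MSA and FFN blocks of one layer from a tiny Bernoulli policy and trains it with a REINFORCE-style objective; the agent architecture, action space, and reward are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SkipPolicy(nn.Module):
    """Tiny decision network that outputs keep/skip probabilities for the MSA and
    FFN blocks of one transformer layer (a sketch of the idea, not the paper's agent)."""
    def __init__(self, dim: int):
        super().__init__()
        self.head = nn.Linear(dim, 2)                    # logits for [keep_msa, keep_ffn]

    def forward(self, tokens):                           # tokens: (B, N, D)
        probs = torch.sigmoid(self.head(tokens.mean(dim=1)))     # (B, 2)
        dist = torch.distributions.Bernoulli(probs)
        actions = dist.sample()                          # 1 = execute the block, 0 = skip it
        return actions, dist.log_prob(actions).sum(dim=1)

def reinforce_loss(log_probs, reward):
    """REINFORCE objective; the reward could combine task accuracy and a FLOPs penalty."""
    return -(reward.detach() * log_probs).mean()
```

In the full model, `actions[:, 0]` and `actions[:, 1]` would gate whether the MSA and FFN blocks of that layer are executed for the current input.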
In this research work, our contributions are as follows:
• We introduce an MDETR-inspired transformer-based
model ViTMDETR for grounding based vision and
language tasks.
• We propose a novel method to learn dynamic token
pruning and fusion actions to reduce computational
complexity using reinforcement learning. Addition-
ally, we train the same agents to learn MSA and FFN
layer skipping throughout our vision and language
model to further reduce complexity.
• For better optimization, we align the representations
and predictions of D-ViTMDETR with the representa-
tions and predictions of the ViTMDETR model.
• We perform experiments with both our ViTMDETR
and D-ViTMDETR models on several image and lan-
guage benchmarks for Referring Expression Compre-
hension (REC) and Segmentation (RES), and VQA
tasks. With our dynamic model, D-ViTMDETR, we
can improve the run-time efficiency of the state-of-
the-art models MDETR [14] and GLIP [17] by up to
∼50% on Referring Expression Comprehension and Segmentation, and VQA, with only a maximum ∼0.3% accuracy drop, as seen in Figure 1.
|
Tao_Weakly_Supervised_Monocular_3D_Object_Detection_Using_Multi-View_Projection_and_CVPR_2023 | Abstract
Monocular 3D object detection has become a main-
stream approach in automatic driving for its easy applica-
tion. A prominent advantage is that it does not need Li-
DAR point clouds during the inference. However, most cur-
rent methods still rely on 3D point cloud data for labeling
the ground truths used in the training phase. This incon-
sistency between the training and inference makes it hard
to utilize the large-scale feedback data and increases the
data collection expenses. To bridge this gap, we propose
a new weakly supervised monocular 3D object detec-
tion method, which can train the model with only 2D labels
marked on images. To be specific, we explore three types
of consistency in this task, i.e. the projection, multi-view
and direction consistency, and design a weakly-supervised
architecture based on these consistencies. Moreover, we
propose a new 2D direction labeling method in this task
to guide the model for accurate rotation direction predic-
tion. Experiments show that our weakly-supervised method
achieves comparable performance with some fully super-
vised methods. When used as a pre-training method, our
model can significantly outperform the corresponding fully-
supervised baseline with only 1/3 3D labels.
| 1. Introduction
Monocular 3D object detection is a foundational re-
search area in computer vision and plays an important role
in autonomous driving systems. It aims to identify the ob-
jects and estimate the 3D bounding boxes of the correspond-
ing targets with a single image as input. Different from
3D point cloud detection methods like [42, 43, 47, 53, 61],
∗Equal contribution. †Corresponding author: Jianbing Shen . This
work was supported in part by the FDCT grant SKL-IOTSC(UM)-2021-
2023, the FDCT Grant 0123/2022/AFJ, the Grant MYRG-CRG2022-
00013-IOTSC-ICI, and the Grant SRG2022-00023-IOTSC.
Figure 1. Illustration of the projection and multi-view con-
sistency. (a) Only projection consistency cannot determine the
accurate position of the target because projection loss has more
than one optimal solution in the 3D space. For example, the two
dashed boxes in 3D space produce the same projection loss be-
cause they have the same projection in 2D space. (b) Constrained
by the multi-view consistency, the optimal solution must be the
common solution for two viewpoints, that is, the target location.
monocular 3D detection models [28,34,48,49,65] alleviate
the need for LiDAR sensors, making self-driving systems easier to deploy.
However, there is still a challenging problem that lim-
its the application of 3D object detection with pure cam-
era vision data. That is, the ground truth 3D boxes used
in the training phases are usually labeled with 3D point
clouds [2, 10, 45]. Recently, self-driving systems with pure
camera vision inputs have become a new trend. But the
feedback video clips captured by the production cars can-
not be utilized to improve the 3D object detection mod-
els because of the lack of training labels. Compared with
the data from the data-collection cars, the feedback images
from production cars have a larger scale diversity and con-
tain more corner cases, which are crucial for improving the
robustness of models.
In this paper, we propose a new weakly supervised train-
ing method, which can train the 3D object detection models
with only camera images and 2D labels, making it possible
to utilize the feedback data from the production cars. To
achieve this goal, we exploit three types of consistency be-
tween the 3D boxes and 2D images and fully utilize them
for training the object detection models. The first is the pro-
jection consistency. With the intrinsic matrix of a camera,
a 3D box predicted by the models can be projected into the
2D image space, and the projected boxes should be con-
sistent with the corresponding 2D boxes. Based on this,
we propose a projection loss by minimizing the difference
between the projected boxes and 2D ground truths. This
criterion can guide the predicted 3D boxes into the projec-
tion regions. However, only projection consistency cannot
provide enough information to correct the errors in the 3D
space, especially for the depth dimension, as shown in Fig. 1
(a). According to the perspective principle, multiple boxes
in the 3D space can be projected into the same 2D box in
the image. Thus, there is more than one optimal solution
for the projection loss, and the errors caused by these boxes
cannot be optimized by the projection loss.
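To make the projection consistency concrete, the following sketch projects the eight corners of a predicted 3D box with the camera intrinsics and compares the enclosing 2D box against the 2D label. The corner parameterization and the use of an L1 distance (rather than, e.g., an IoU-based term) are illustrative assumptions, not details taken from the paper.

```python
import torch

def project_box_corners(corners_3d, K):
    """corners_3d: (B, 8, 3) box corners in camera coordinates; K: (B, 3, 3) intrinsics.
    Returns pixel coordinates (B, 8, 2)."""
    cam = torch.matmul(corners_3d, K.transpose(1, 2))              # (B, 8, 3)
    return cam[..., :2] / cam[..., 2:3].clamp(min=1e-6)

def projection_loss(corners_3d, K, box2d_gt):
    """box2d_gt: (B, 4) as (x1, y1, x2, y2). Compare the enclosing box of the
    projected corners with the labeled 2D box (L1 here, as one simple choice)."""
    uv = project_box_corners(corners_3d, K)
    pred = torch.cat([uv.min(dim=1).values, uv.max(dim=1).values], dim=1)   # (B, 4)
    return torch.abs(pred - box2d_gt).mean()
```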
Aiming at solving this limitation, we incorporate multi-
view consistency into our method to minimize the errors
in 3D space. The same object captured from different view-
points would show different positions and shapes in the cor-
responding 2D images. But in the 3D space, 3D bounding
boxes belonging to the same object should be consistent, i.e.
they should be of the same position, size and rotation angle
in a certain coordinate system. Based on this, we construct
the multi-view consistency by minimizing the discrepancy
between the predicted bounding boxes of the same object
from a different point of view. As shown in Fig. 1(b), pro-
jection losses on the two viewpoints will constrain the pre-
dictions into their projection regions, and the multi-view
consistency will further guide the predictions to the com-
mon optimal solutions of the two views, which are where
the objects are located. Notably, in our work, images paired
from different viewpoints are only used for calculating the
losses, and the models still only take monocular inputs in
the evaluation phase.
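A minimal sketch of the multi-view consistency idea is given below: predictions of the same object from two viewpoints are mapped into a shared world frame and their discrepancy is penalized. The box parameterization and the availability of camera-to-world transforms T1 and T2 are assumptions made for illustration.

```python
import torch

def to_world(centers_cam, T_world_cam):
    """centers_cam: (B, 3) box centers in a camera frame; T_world_cam: (B, 4, 4)."""
    homo = torch.cat([centers_cam, torch.ones_like(centers_cam[:, :1])], dim=1)  # (B, 4)
    return torch.einsum('bij,bj->bi', T_world_cam, homo)[:, :3]

def multiview_consistency_loss(pred_v1, pred_v2, T1, T2):
    """pred_v1/pred_v2: dicts with 'center' (B, 3) and 'size' (B, 3) predicted for the
    same objects from two views; T1/T2: camera-to-world transforms of the two views."""
    c1 = to_world(pred_v1['center'], T1)
    c2 = to_world(pred_v2['center'], T2)
    center_term = torch.abs(c1 - c2).mean()
    size_term = torch.abs(pred_v1['size'] - pred_v2['size']).mean()
    return center_term + size_term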
The last consistency presented in this paper is the direc-
tion consistency for guiding the prediction of the direction
scale. In previous works [2] [45] [10], 3D rotation direction
is labeled on point clouds by a vector from the center to the
front of objects. To avoid the need for 3D LiDAR data, we
propose a new labeling method named 2D direction label
directly on pure camera images, indicating the 2D direction
of the object in the images. The predicted 3D box rotation
should be consistent with the direction in 2D space when
they are projected, i.e. the direction consistency. Based on
this consistency, we further design a 2D rotation loss for
optimizing the rotation-scale estimation.
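The direction consistency can be sketched analogously: the predicted 3D heading is projected into the image and its in-plane direction is compared with the labeled 2D direction. The exact parameterization and the cosine-based penalty below are assumptions.

```python
import torch

def direction_consistency_loss(center_3d, heading_3d, K, dir2d_gt):
    """center_3d, heading_3d: (B, 3) object center and unit heading in camera coords;
    dir2d_gt: (B, 2) labeled 2D direction in the image. Directions are compared in 2D."""
    p0 = torch.matmul(center_3d.unsqueeze(1), K.transpose(1, 2)).squeeze(1)
    p1 = torch.matmul((center_3d + heading_3d).unsqueeze(1), K.transpose(1, 2)).squeeze(1)
    v = p1[:, :2] / p1[:, 2:3].clamp(min=1e-6) - p0[:, :2] / p0[:, 2:3].clamp(min=1e-6)
    cos = torch.nn.functional.cosine_similarity(v, dir2d_gt, dim=1)
    return (1.0 - cos).mean()          # 0 when projected and labeled directions agree
```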
The proposed weakly supervised method is a general
framework that can be integrated with most monocular 3D
detection models. To show the efficiency of our method,
we incorporate it with a representative model - DD3D [34]in this task, and evaluate it on the KITTI benchmark. Re-
sults show that our method can achieve comparable per-
formance with some fully supervised methods. Also, to
demonstrate the application in real scenes, we collect a new
dataset named ProdCars from production cars and evaluate
the performance of our method on it.
In short, in this work, we propose a novel weakly su-
pervised method for monocular 3D object detection, which
only utilizes 2D labels as ground truth without depend-
ing on 3D point clouds for labeling, making us the first to
do so. Our approach incorporates projection consistency
and multi-view consistency, which are used to design two
consistency losses guiding the prediction of accurate 3D
bounding boxes. Additionally, we introduce a new labeling
method called 2D direction label, replacing the 3D rotation
label in point clouds data and a direction consistency loss
based on the new labels. Our experiments show that our
proposed weakly supervised method achieves comparable
performance with some fully supervised methods, and even
with only 1/3 of the ground truth labels, our method out-
performs corresponding fully supervised baselines, demon-
strating its potential for improving models based on feed-
back production data.
|
Sun_Learning_Audio-Visual_Source_Localization_via_False_Negative_Aware_Contrastive_Learning_CVPR_2023 | Abstract
Self-supervised audio-visual source localization aims to
locate sound-source objects in video frames without extra
annotations. Recent methods often approach this goal with
the help of contrastive learning, which assumes only the au-
dio and visual contents from the same video are positive
samples for each other. However, this assumption would
suffer from false negative samples in real-world training.
For example, for an audio sample, treating the frames from
the same audio class as negative samples may mislead the
model and therefore harm the learned representations (e.g.,
the audio of a siren wailing may reasonably correspond to
the ambulances in multiple images). Based on this obser-
vation, we propose a new learning strategy named False
Negative Aware Contrastive ( FNAC ) to mitigate the prob-
lem of misleading the training with such false negative sam-
ples. Specifically, we utilize the intra-modal similarities
to identify potentially similar samples and construct corre-
sponding adjacency matrices to guide contrastive learning.
Further, we propose to strengthen the role of true negative
samples by explicitly leveraging the visual features of sound
sources to facilitate the differentiation of authentic sound-
ing source regions. FNAC achieves state-of-the-art perfor-
mances on Flickr-SoundNet, VGG-Sound, and AVSBench,
which demonstrates the effectiveness of our method in mit-
igating the false negative issue. The code is available at
https://github.com/OpenNLPLab/FNAC_AVL .
| 1. Introduction
When hearing a sound, humans can naturally imagine
the visual appearance of the source objects and locate them
in the scene. This demonstrates that audio-visual corre-
spondence is an important ability for scene understand-
ing. Given that unlimited paired audio-visual data ex-
ists in nature, there is an emerging interest in developing
multi-modal systems with audio-visual understanding abil-
*Indicates equal contribution
Figure 1. False negative in audio-visual contrastive learning.
Audio-visual pairs with similar contents are falsely considered as
negative samples to each other and pushed apart in the shared la-
tent space, which we find would affect the model performance.
ity. Various audio-visual tasks have been studied, including
sound source localization [8, 19–21, 26–28], audio-visual
event localization [32, 33, 35, 39], audio-visual video pars-
ing [11, 18, 31] and audio-visual segmentation [37, 38]. In
this work, we focus on unsupervised visual sound source lo-
calization, with the aim of localizing the sound-source ob-
jects in an image using its paired audio clip, but without
relying on any manual annotations.
The essence of unsupervised visual sound source local-
ization is to leverage the co-occurrences between an audio
clip and its corresponding image to extract representations.
A major part of existing methods [8, 19–21, 28] formulates
this task as contrastive learning. For each image sample,
its paired audio clip is viewed as the positive sample, while
all other audio clips are considered as negative. Likewise,
each audio clip considers its paired image as positive and
all others as negative. As such, the Noise Contrastive Es-
timation (NCE) loss [24, 30] is used to perform instance
discrimination by pushing closer the distance between a
positive audio-image pair, while pulling away any nega-
tive pairs. However, the contrastive learning scheme above
suffers from the issue of false negatives during training,
i.e., audio/image samples that belong to the semantically-
matched class but are not regarded as a positive pair (due to
the lack of manual labeling). A typical example is shown
in Fig. 1. Research shows [4, 16, 29, 36] that these false
negatives will lead to contradictory objectives and harm the
representation learning.
Motivated by this observation, we assess the impact of
false negatives in real-world training. We discover that with
a batch size of 128, around 40% of the samples in VGG-
Sound [9] will encounter at least one false negative sample
during training. We then validate that false negatives indeed
harm performance by artificially increasing the proportion
of false negatives during training, and observing a notice-
able performance drop. To make matters worse, larger batch
sizes are often preferred in contrastive learning [24], but
it may inadvertently increase the number of false negative
samples during training and affect representation quality.
To this end, we propose a false-negative aware audio-
visual contrastive learning framework (FNAC), where we
employ the intra-modal similarities as weak supervision.
Specifically, we compute pair-wise similarities between all
audio clips in a mini-batch, without considering the visual modality, to
form an audio intra-modal adjacency matrix. Likewise, in
the visual modality, we obtain an image adjacency matrix.
We found that the adjacency matrices effectively identify
potential samples of the same class within each modality
(Fig. 4). The information can then be used to mitigate the
false negatives and enhance the effect of true pairings.
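As a rough illustration of how intra-modal adjacency can temper the contrastive objective, the sketch below builds audio-audio and image-image similarity matrices within a batch and down-weights suspected false negatives in an InfoNCE-style loss. The specific weighting used by FNS differs in detail; the scheme here is only one simple instantiation.

```python
import torch
import torch.nn.functional as F

def fns_contrastive_loss(audio_emb, image_emb, tau=0.07):
    """audio_emb, image_emb: (B, D) embeddings of paired clips/frames.
    Intra-modal similarities flag likely false negatives, whose contribution to the
    denominator is suppressed (one simple instantiation of the idea)."""
    a = F.normalize(audio_emb, dim=1)
    v = F.normalize(image_emb, dim=1)
    logits = a @ v.t() / tau                     # (B, B) audio-to-image similarities
    adj = 0.5 * (a @ a.t() + v @ v.t())          # intra-modal adjacency (higher = more alike)
    weights = 1.0 - adj.clamp(min=0.0)           # down-weight similar (likely false) negatives
    weights.fill_diagonal_(1.0)                  # keep the true positive term intact
    exp = torch.exp(logits) * weights
    loss = -torch.log(exp.diag() / exp.sum(dim=1))
    return loss.mean()
```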
Specifically, we propose two complementary strategies:
1)FNS for False Negatives Suppression, and 2) TNE for
True Negatives Enhancement. First, when optimizing the
NCE loss, FNS regularizes the inter-modal and intra-modal
similarities. Intrinsically, intra-modal adjacency explores
potential false negatives by the similarity intensities and the
pulling forces applied to these false negatives are canceled
accordingly. Furthermore, we introduce TNE to empha-
size the true negative influences in a region-wise manner,
which in turn reduces the effect of false negative samples
as well. We adopt the audio adjacency matrix to identify
dissimilar samples, i.e., true negatives. Intuitively, dissimi-
lar (true negative) sounds correspond to distinct regions, so
the localized regions across the identified true negatives are
regularized to be different. Such a mechanism encourages
the model to discriminate genuine sound-source regions and
suppress the co-occurring quiet objects. We conduct exten-
sive analysis to demonstrate the effectiveness of our pro-
posed method and report competitive performances across
different settings and datasets. In summary, our main con-
tributions are:
• We investigate the false negative issue in audio-visual
contrastive learning. We quantitatively validate that
this issue occurs and harms the representation quality.
• We exploit intra-modal similarities to identify poten-
tial false negatives and introduce FNS to suppress their
impact.• We propose TNE, which emphasizes true negatives us-
ing different localization results between the identified
true negatives, thus encouraging more discriminative
sound source localizations.
|
Sun_Pose_Synchronization_Under_Multiple_Pair-Wise_Relative_Poses_CVPR_2023 | Abstract
Pose synchronization, which seeks to estimate consistent
absolute poses among a collection of objects from noisy
relative poses estimated between pairs of objects in
isolation, is a fundamental problem in many inverse
applications. This paper studies an extreme setting where
multiple relative pose estimates exist between each object
pair, and the majority is incorrect. Popular methods
that solve pose synchronization via recovering a low-rank
matrix that encodes relative poses in block fail under this
extreme setting. We introduce a three-step algorithm for
pose synchronization under multiple relative pose inputs.
The first step performs diffusion and clustering to compute
the candidate poses of the input objects. We present
a theoretical result to justify our diffusion formulation.
The second step jointly optimizes the best pose for each
object. The final step refines the output of the second step.
Experimental results on benchmark datasets of structure-
from-motion and scan-based geometry reconstruction show
that our approach offers more accurate absolute poses than
state-of-the-art pose synchronization techniques.
| 1. Introduction
Pose synchronization, which seeks to estimate absolute
object poses from noisy relative poses estimated between
object pairs, is a fundamental problem in many inverse
applications in vision and graphics. Examples include
multi-view structure from motion [38], 3D reconstruction
from RGB-D scans [21], and reassembling fractured
objects [14]. This problem has seen great progress during the past two decades, evolving from early greedy approaches [14, 21] to recent optimization-based
approaches [2, 4, 11, 15, 17, 19, 20, 26–31, 34, 37]. However,
existing approaches assume that there is only one relative
pose for each object pair, and most relative poses are inliers.
This assumption breaks when relative pose estimation is
challenging, e.g., in 3D reconstruction from sparse views.
The correct poses may differ from the top-ranked relative
poses obtained by a pairwise matching method.
In this paper, we study a new pose synchronization
Figure 1. (Panels: multiple relative pose candidates; global poses.) Our approach takes multiple candidate relative poses
between pairs of objects as input and outputs absolute poses of the
input objects for geometry reconstruction.
setting, where there are multiple relative pose estimates
between an object pair, and most of them may be incorrect.
This setting is quite popular, e.g., when overlapping ratios
are low, or objects possess partial symmetries. Our
approach proceeds in three simple steps. The first step
computes for each object a set of candidate poses. This
step is based on the fact that a correct relative pose
between any object and a root object shall be realized by
composing relative poses along multiple paths that connect
them. We introduce an iterative procedure that alternates
between diffusion and clustering to compute candidate
poses. The second step solves a Markov Random Field
(MRF) inference problem to jointly select the best pose for
each object so that the induced relative pose agrees with
the input relative poses. The third step performs robust
optimization to fine-tune the absolute poses of input objects.
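The candidate-generation idea can be illustrated with a toy sketch that composes every candidate relative rotation along root-to-target paths and greedily clusters the results by geodesic distance. This only illustrates the path-composition intuition; the paper's actual procedure is the diffusion-and-clustering formulation described below, and the composition convention, clustering rule, and threshold here are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def geodesic_deg(r1, r2):
    """Geodesic distance between two rotations, in degrees."""
    return np.degrees((r1.inv() * r2).magnitude())

def candidate_rotations(paths, rel_candidates, thresh_deg=15.0):
    """paths: list of root-to-target paths, each a list of directed edges (i, j);
    rel_candidates: dict mapping an edge (i, j) to a list of candidate relative
    rotations (scipy Rotation objects; the composition convention is assumed).
    Returns one representative rotation per cluster, largest cluster first."""
    composed = []
    for path in paths:
        opts = [R.identity()]
        for edge in path:                        # enumerate every candidate along the path
            opts = [prev * cand for prev in opts for cand in rel_candidates[edge]]
        composed.extend(opts)
    clusters = []                                # greedy clustering by geodesic distance
    for rot in composed:
        for cluster in clusters:
            if geodesic_deg(cluster[0], rot) < thresh_deg:
                cluster.append(rot)
                break
        else:
            clusters.append([rot])
    clusters.sort(key=len, reverse=True)
    return [cluster[0] for cluster in clusters]
```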
Our approach’s novelty is a diffusion formulation that
synchronizes potentially multiple relative poses between
object pairs into candidate poses for each object. The
formulation, which utilizes a mixture model, is accurate,
robust to noise, and theoretically justified. The resulting
candidate poses enable a simple MRF approach via the
projected power method [36]. Compared to prior MRF
formulations [8,33] that are based on uniform sampling, our
approach does not suffer from discretization errors.
We have evaluated our approach on benchmark
datasets of multi-view structure-from-motion and geometry
reconstruction from depth scans. Experimental results
show that our approach outperforms state-of-the-art pose
synchronization approaches.
|
Truong_SPARF_Neural_Radiance_Fields_From_Sparse_and_Noisy_Poses_CVPR_2023 | Abstract
Neural Radiance Field (NeRF) has recently emerged as
a powerful representation to synthesize photorealistic novel
views. While showing impressive performance, it relies on
the availability of dense input views with highly accurate
camera poses, thus limiting its application in real-world
scenarios. In this work, we introduce Sparse Pose Adjust-
ing Radiance Field (SPARF), to address the challenge of
novel-view synthesis given only few wide-baseline input im-
ages (as low as 3) with noisy camera poses. Our approach
exploits multi-view geometry constraints in order to jointly
learn the NeRF and refine the camera poses. By relying on
pixel matches extracted between the input views, our multi-
view correspondence objective enforces the optimized scene
and camera poses to converge to a global and geometrically
accurate solution. Our depth consistency loss further en-
courages the reconstructed scene to be consistent from any
viewpoint. Our approach sets a new state of the art in the
sparse-view regime on multiple challenging datasets.
| 1. Introduction
Novel-view synthesis (NVS) has long been one of the
most essential goals in computer vision. It refers to the task
of rendering unseen viewpoints of a scene given a particu-
lar set of input images. NVS has recently gained tremen-
dous popularity, in part due to the success of Neural Radi-
ance Fields (NeRFs) [30]. NeRF encodes 3D scenes with a
multi-layer perceptron (MLP) mapping 3D point locations
to color and volume density and uses volume rendering to
synthesize images. It has demonstrated remarkable abilities
for high-fidelity view synthesis under two conditions: dense
input views and highly accurate camera poses.
Both these requirements however severely impede the
usability of NeRFs in real-world applications. For instance,
in AR/VR or autonomous driving, the input is inevitably
much sparser, with only few images of any particular object
or region available per scene. In such sparse-view scenario,
NeRF rapidly overfits to the input views [11, 22, 32], lead-
This work was conducted during an internship at Google.
Figure 1. Novel-view rendering from sparse images. We show
the RGB (second row) and depth (last row) renderings from an
unseen viewpoint under sparse settings (3 input views only). Even
with ground-truth camera poses, NeRF [30] overfits to the training
images, leading to degenerate geometry (almost constant depth).
BARF [24], which can successfully handle noisy poses when
dense views are available, struggles in the sparse regime. Our
approach SPARF instead produces realistic novel-view renderings
with accurate geometry, given only 3 input views with noisy poses.
ing to inconsistent reconstructions at best, and degenerate
solutions at worst (Fig. 1 left). Moreover, the de-facto stan-
dard to estimate per-scene poses is to use an off-the-shelf
Structure-from-Motion approach, such as COLMAP [37].
When provided with many input views, COLMAP can gen-
erally estimate accurate camera poses. Its performance nev-
ertheless rapidly degrades when reducing the number of
views, or increasing the baseline between the images [55].
Multiple works focus on improving NeRF’s performance
in the sparse-view setting. One line of research [6,53] trains
conditional neural field models on large-scale datasets. Al-
ternative approaches instead propose various regularization
on color and geometry for per-scene training [11,19,22,32,
34]. Despite showing impressive results in the sparse sce-
nario, all these approaches assume perfect camera poses as
a pre-requisite. Unfortunately, estimating accurate camera
poses for few wide-baseline images is challenging [55] and
has spawned its own research direction [1,7,14–16,28,60],
hence making this assumption unrealistic.
Recently, multiple approaches attempt to reduce the de-
pendency of NeRFs on highly accurate input camera poses.
They rely on per-image training signals, such as a photomet-
ric [9, 24, 29, 48, 50] or silhouette loss [5, 23, 56], to jointly
optimize the NeRF and the poses. However, in the sparse-
view scenario where the 3D space is under-constrained, we
observe that it is crucial to explicitly exploit the relation
between the different training images and their underlying
scene geometry, to enforce learning a global and geomet-
rically accurate solution. This is not the case for previous
works [5, 23, 24, 48, 50, 56], which hence fail to register the
poses in the sparse regime. As shown in Fig. 1, middle for
BARF [24], it leads to poor novel-view synthesis quality.
We propose Sparse Pose Adjusting Radiance Field
(SPARF), a joint pose-NeRF training strategy. Our ap-
proach produces realistic novel-view renderings given only
few wide-baseline input images (as low as 3) with noisy
camera poses (see Fig. 1 right). Crucially, it does not as-
sume any prior on the scene or object shape. We introduce
novel constraints derived from multi-view geometry [17] to
drive and bound the NeRF-pose optimization. We first in-
fer pixel correspondences relating the input views with a
pre-trained matching model [43]. These pixel matches are
utilized in our multi-view correspondence objective, which
minimizes the re-projection error using the depth rendered
by the NeRF and the current pose estimates. Through the
explicit connection between the training views, the loss en-
forces convergence to a global and geometrically accurate
pose/scene solution, consistent across all training views.
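A simplified sketch of such a multi-view correspondence objective is shown below: matched source pixels are back-projected with the NeRF-rendered depth, transformed with the current relative pose estimate, re-projected into the target view, and compared with their matched locations. The Huber penalty and the exact variable shapes are simplifications and assumptions.

```python
import torch

def reproject(pix, depth, K, T_tgt_src):
    """pix: (N, 2) matched pixels in the source view; depth: (N,) depth rendered by the
    NeRF at those pixels; K: (3, 3) intrinsics; T_tgt_src: (4, 4) relative pose estimate.
    Returns the corresponding pixel locations in the target view."""
    ones = torch.ones_like(pix[:, :1])
    rays = torch.cat([pix, ones], dim=1) @ torch.inverse(K).t()    # back-project to rays
    pts_src = rays * depth[:, None]                                # 3D points in the source camera
    pts_h = torch.cat([pts_src, ones], dim=1) @ T_tgt_src.t()      # move into the target camera
    proj = pts_h[:, :3] @ K.t()
    return proj[:, :2] / proj[:, 2:3].clamp(min=1e-6)

def correspondence_loss(pix_src, pix_tgt, depth_src, K, T_tgt_src):
    """Robust re-projection error between predicted and matched target pixels."""
    pred = reproject(pix_src, depth_src, K, T_tgt_src)
    return torch.nn.functional.huber_loss(pred, pix_tgt)
```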
We also propose the depth consistency loss to boost the ren-
dering quality from novel viewpoints. By using the depth
rendered from the training views to create pseudo-ground-
truth depth for unseen viewing directions, it encourages the
reconstructed scene to be consistent from any viewpoint . We
extensively evaluate and compare our approach on the chal-
lenging DTU [20], LLFF [38], and Replica [39] datasets,
setting a new state of the art on all three benchmarks.
|
Song_Optimization-Inspired_Cross-Attention_Transformer_for_Compressive_Sensing_CVPR_2023 | Abstract
By integrating certain optimization solvers with deep
neural networks, deep unfolding network (DUN) with good
interpretability and high performance has attracted grow-
ing attention in compressive sensing (CS). However , exist-ing DUNs often improve the visual quality at the price of alarge number of parameters and have the problem of fea-
ture information loss during iteration. In this paper , wepropose an Optimization-inspired Cross-attention Trans-former (OCT) module as an iterative process, leading toa lightweight OCT -based Unfolding Framework ( OCTUF )
for image CS. Specifically, we design a novel Dual Cross At-tention (Dual-CA) sub-module, which consists of an Inertia-Supplied Cross Attention (ISCA) block and a Projection-Guided Cross Attention (PGCA) block. ISCA block intro-duces multi-channel inertia forces and increases the mem-ory effect by a cross attention mechanism between adja-
cent iterations. And, PGCA block achieves an enhanced
information interaction, which introduces the inertia forceinto the gradient descent step through a cross attentionblock. Extensive CS experiments manifest that our OCTUF
achieves superior performance compared to state-of-the-art
methods while requiring lower training complexity. Codes are avail-
OCTUF .
| 1. Introduction
Compressive sensing (CS) is a considerable research in-
terest from signal/image processing communities as a joint
acquisition and reconstruction approach [ 5]. The signal is
first sampled and compressed simultaneously with linearrandom transformations. Then, the original signal can be re-constructed from far fewer measurements than that required
∗Corresponding author . This work was supported in part by Shen-
zhen Research Project under Grant JCYJ20220531093215035 and Grant JSGGZD20220822095800001.

Figure 1. The PSNR (dB) performance (y-axis) of our OCTUF
and some recent methods (ISTA-Net [ 54], DPA-Net [ 44], AMP-
Net [ 60], MAC-Net [ 19], COAST [ 53], MADUN [ 41], CASNet
[7], TransCS [ 39], FSOINet [ 10], MR-CCSNet [ 16]) under differ-
ent parameter capacities (x-axis) on Set11 [ 24] dataset in the case
of CS ratio = 25% . Our proposed method outperforms previous
methods while requiring significantly fewer parameters.
by Nyquist sampling rate [ 29,38]. So, the two main con-
cerns of CS are the design of the sampling matrix [ 7,16]
and recovering the original signal [ 60], and our work fo-
cuses on the latter. Meanwhile, the CS technology achievesgreat success in many image systems, including medicalimaging [ 31,45], single-pixel cameras [ 15,37], wireless
remote monitoring [ 59], and snapshot compressive imag-
ing [ 4,50,51], because it can reduce the measurement and
storage space while maintaining a reasonable reconstruction
of the sparse or compressible signal.
Mathematically, a random linear measurement $y \in \mathbb{R}^M$ can be formulated as $y = \Phi x$, where $x \in \mathbb{R}^N$ is the original signal and $\Phi \in \mathbb{R}^{M \times N}$ is the measurement matrix with $M \ll N$; $M/N$ is the CS ratio (or sampling rate). Obviously,
CS reconstruction is an ill-posed inverse problem. To ob-tain a reliable reconstruction, the conventional CS methods
commonly solve an energy function as:
$$\arg\min_{x}\ \tfrac{1}{2}\|\Phi x - y\|_2^2 + \lambda R(x), \qquad (1)$$
where $\tfrac{1}{2}\|\Phi x - y\|_2^2$ denotes the data-fidelity term for modeling the likelihood of degradation and $\lambda R(x)$ denotes the prior term with regularization parameter $\lambda$. For traditional
model-based methods [ 17,20,26,32,56,57,64], the prior
term can be the sparsifying operator corresponding to somepre-defined transform basis, such as discrete cosine trans-form (DCT) and wavelet [ 61,62]. They enjoy the merits of
strong convergence and theoretical analysis in most casesbut are usually limited in high computational complexity
and low adaptivity [ 63]. Recently, fueled by the power-
ful learning capacity of deep networks, several network-
based CS algorithms have been proposed [ 24,44]. Although
network-based methods can solve CS problem adaptively
with fast inferences, the architectures of most of these meth-
ods are the black box design and the advantages of tradi-
tional algorithms are not fully considered [ 36].
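For concreteness, the classical proximal-gradient (ISTA-style) iteration for Eq. (1) with a sparsity prior R(x) = ‖x‖₁ is sketched below; deep unfolding networks such as the one proposed here replace the hand-crafted threshold/proximal step with learned modules. The choice of prior and step size is illustrative, not taken from the paper.

```python
import numpy as np

def ista(y, Phi, lam=0.01, n_iter=100):
    """Solve min_x 0.5*||Phi x - y||_2^2 + lam*||x||_1 by proximal gradient descent.
    Each iteration = gradient step on the data-fidelity term + soft-thresholding
    (the proximal operator of the l1 prior)."""
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1/L, with L the Lipschitz constant
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ x - y)                  # gradient of 0.5*||Phi x - y||^2
        z = x - step * grad                           # gradient descent step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x
```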
More recently, some deep unfolding networks (DUNs)
with good interpretability are proposed to combine network
with optimization and train a truncated unfolding inferencethrough an end-to-end learning manner, which has become
the mainstream for CS [ 52–55,60]. However, existing deep
unfolding algorithms usually achieve excellent performancewith a large number of iterations and a huge number of pa-rameters [ 41,42], which are easily limited by storage space.
Furthermore, the image-level transmission at each iterationfails to make full use of inter-stage feature information.
To address the above problems, in this paper, we
propose an efficient Optimization-inspired Cross-attention
Transformer ( OCT ) module as the iterative process and
establish a lightweight OCT -based Unfolding Framework
(OCTUF ) for image CS, as shown in Fig. 2. Our OCT mod-
ule maintains maximum information flow in feature space,which consists of a Dual Cross Attention (Dual-CA) sub-module and a Feed-Forward Network (FFN) sub-moduleto form each iterative process. Dual-CA sub-module con-tains an Inertia-Supplied Cross Attention (ISCA) block and
a Projection-Guided Cross Attention (PGCA) block. ISCA
block calculates cross attention on adjacent iteration infor-mation and adds inertial/memory effect to the optimization
algorithm. And, PGCA block uses the gradient descent step
and inertial term as inputs of Cross Attention (CA) blockto guide the fine fusion of channel-wise features. With the
proposed techniques, OCTUF outperforms state-of-the-art
CS methods with much fewer parameters, as illustrated inFig. 1. The main contributions are summarized as follows:
• We propose a lightweight deep unfolding frame-
work OCTUF in feature space for CS, where
the optimization-inspired cross-attention Transformer(OCT) module is regarded as an iterative process.• We design a compact Dual Cross Attention (Dual-CA)
sub-module to guide the efficient multi-channel infor-
mation interactions, which consists of a Projection-Guided Cross Attention (PGCA) block and an Inertia-Supplied Cross Attention (ISCA) block.
• Extensive experiments demonstrate that our proposed
OCTUF outperforms existing state-of-the-art methods
with cheaper computational and memory costs.
|
Tu_Visual_Query_Tuning_Towards_Effective_Usage_of_Intermediate_Representations_for_CVPR_2023 | Abstract
Intermediate features of a pre-trained model have been
shown informative for making accurate predictions on
downstream tasks, even if the model backbone is kept frozen.
The key challenge is how to utilize these intermediate fea-
tures given their gigantic amount. We propose visual query
tuning (VQT), a simple yet effective approach to aggregate
intermediate features of Vision Transformers. Through in-
troducing a handful of learnable “query” tokens to each
layer, VQT leverages the inner workings of Transformers
to “summarize” rich intermediate features of each layer,
which can then be used to train the prediction heads of
downstream tasks. As VQT keeps the intermediate features
intact and only learns to combine them, it enjoys memory
efficiency in training, compared to many other parameter-
efficient fine-tuning approaches that learn to adapt features
and need back-propagation through the entire backbone.
This also suggests the complementary role between VQT
and those approaches in transfer learning. Empirically,
VQT consistently surpasses the state-of-the-art approach
that utilizes intermediate features for transfer learning and
outperforms full fine-tuning in many cases. Compared to
parameter-efficient approaches that adapt features, VQT
achieves much higher accuracy under memory constraints.
Most importantly, VQT is compatible with these approaches
to attain even higher accuracy, making it a simple add-
on to further boost transfer learning. Code is available at
https://github.com/andytu28/VQT .
| 1. Introduction
Transfer learning by adapting large pre-trained models to
downstream tasks has been a de facto standard for competi-
tive performance, especially when downstream tasks have
limited data [ 37,59]. Generally speaking, there are two
ways to adapt a pre-trained model [ 15,27]: updating the
model backbone for new feature embeddings (the output
of the penultimate layer) or recombining the existing fea-
*Equal contributions.ture embeddings, which correspond to the two prevalent ap-
proaches, fine-tuning andlinear probing , respectively. Fine-
tuning , or more specifically, full fine-tuning , updates all the
model parameters end-to-end based on the new dataset. Al-
though fine-tuning consistently outperforms linear probing
on various tasks [ 54], it requires running gradient descent
for all parameters and storing a separate fine-tuned model
for each task, making it computationally expensive and pa-
rameter inefficient. These problems become more salient
with Transformer-based models whose parameters grow ex-
ponentially [ 17,26,46]. Alternatively, linear probing only
trains and stores new prediction heads to recombine features
while keeping the backbone frozen. Despite its computa-
tional and parameter efficiency, linear probing is often less
attractive due to its inferior performance.
Several recent works have attempted to overcome such a
dilemma in transfer learning. One representative work is by
Evci et al.[15], who attributed the success of fine-tuning to
leveraging the “intermediate” features of pre-trained models
and proposed to directly allow linear probing to access the
intermediate features. Some other works also demonstrated
the effectiveness of such an approach [ 14,15]. Nevertheless,
given numerous intermediate features in each layer, most of
these methods require pooling to reduce the dimensionality,
which likely would eliminate useful information before the
prediction head can access it.
To better utilize intermediate features, we propose Vi-
sual Query Tuning (VQT) , a simple yet effective approach
to aggregate the intermediate features of Transformer-based
models like Vision Transformers (ViT) [ 13]. A Transformer
usually contains multiple Transformer layers, each starting
with a Multi-head self-attention (MSA) module operating
over the intermediate feature tokens (often >100tokens)
outputted by the previous layer. The MSA module trans-
forms each feature token by querying all the other tokens,
followed by a weighted combination of their features.
Taking such inner workings into account, VQT intro-
duces a handful of learnable “query” tokens to each layer,
which, through the MSA module, can then “summarize” the
intermediate features of the previous layer to reduce the di-
mensionality. The output features of these query tokens af-
ter each layer can then be used by linear probing to make
predictions. Compared to pooling which simply averages
the features over tokens, VQT performs a weighted combi-
nation whose weights are adaptive, conditioned on the fea-
tures and the learned query tokens, and is more likely to
capture useful information for the downstream task.
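The core VQT operation can be sketched as follows: a handful of learnable tokens contribute only queries to the attention, attend over a layer's frozen feature tokens, and return summary vectors, so the intermediate features themselves are never modified. The single-head form and freshly initialized projection weights below are simplifications; in VQT the summarization goes through the model's own MSA module.

```python
import torch
import torch.nn as nn

class QueryTokenSummary(nn.Module):
    """Single-head sketch: T learnable query tokens attend over a layer's N frozen
    feature tokens and return T summary vectors; keys and values come only from the
    original tokens, so the intermediate features are left untouched."""
    def __init__(self, dim: int, num_queries: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.wq = nn.Linear(dim, dim)
        self.wk = nn.Linear(dim, dim)
        self.wv = nn.Linear(dim, dim)

    def forward(self, tokens):                            # tokens: (B, N, D), frozen features
        q = self.wq(self.queries)[None].expand(tokens.size(0), -1, -1)   # (B, T, D)
        k, v = self.wk(tokens), self.wv(tokens)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        return attn @ v                                   # (B, T, D) summarized features
```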
At first glance, VQT may look superficially similar to
Visual Prompt Tuning (VPT) [ 23], a recent transfer learn-
ing method that also introduces additional learnable tokens
(i.e., prompts) to each layer of Transformers, but they are
fundamentally different in two aspects. First, our VQT only
uses the additional tokens to generate queries, not keys and
values, for the MSA module. Thus, it does not change the
intermediate features of a Transformer at all. In contrast, the
additional tokens in VPT generate queries, keys, and values,
and thus can be queried by other tokens and change their
intermediate features. Second, and more importantly, while
ourVQT leverages the corresponding outputs of the addi-
tional tokens as summarized intermediate features, VPT in
its Deep version disregards such output features entirely. In
other words, these two methods take fundamentally differ-
ent routes to approach transfer learning: VQTlearns to
leverage the existing intermediate features, while VPT aims
to adapt the intermediate features. As will be demonstrated
in section 4, these two routes have complementary strengths
and can be compatible to further unleash the power of trans-
fer learning. It is worth noting that most of the recent meth-
ods towards parameter-efficient transfer learning (PETL),
such as Prefix Tuning [30] and AdaptFormer [10], can all be considered as adapting the intermediate features [19]. Thus,
the aforementioned complementary strengths still apply.
Besides the difference in how to approach transfer learn-
ing, another difference between VQT and many other PETL
methods, including VPT, is memory usage in training.
While many of them freeze (most of) the backbone model
and only learn to adjust or add some parameters, the fact
that the intermediate features are updated implies the need
of a full back-propagation throughout the backbone, which
is memory-heavy. In contrast, VQT keeps all the intermedi-
ate features intact and only learns to combine them. Learn-
ing the query tokens thus bypasses many paths in the stan-
dard back-propagation, reducing the memory footprint by
76% compared to VPT.
We validate VQT on various downstream visual recog-
nition tasks, using a pre-trained ViT [ 13] as the backbone.
VQT surpasses the SOTA method that utilizes intermedi-
ate features [ 15] and full fine-tuning in most tasks. We fur-
ther demonstrate the robust and mutually beneficial compat-
ibility between VQT and existing PETL approaches using
different pre-trained backbones, including self-supervised
and image-language pre-training. Finally, VQT achieves
much higher accuracy than other PETL methods in a low-
memory regime, suggesting that it is a more memory-efficient method.

Figure 1. Our Visual Query Tuning (VQT) vs. Visual Prompt Tuning (VPT) [23]. (a) VPT (deep version); (b) our VQT. Our VQT allows linear probing to directly access the intermediate features of a frozen Transformer model for parameter-efficient transfer learning. The newly introduced query tokens in VQT (marked by the red empty boxes in the red shaded areas) only append additional columns (i.e., Q′) to the Query features Q, not to the Value features V and the Key features K. Thus, VQT keeps the intermediate features intact (gray empty boxes), enabling it to bypass expensive back-propagation steps in training (hence memory efficient). In contrast, VPT modifies the intermediate features (gray solid boxes) and needs more memory to learn its prompts. Please see section 3 for details.
To sum up, our key contributions are
1.We propose VQT to aggregate intermediate features of
Transformers for effective linear probing, featuring pa-
rameter and memory efficient transfer learning.
2.VQT is compatible with other PETL methods that adapt
intermediate features, further boosting the performance.
3.VQT is robust to different pre-training setups, including
self-supervised and image-language pre-training.
|
Takagi_High-Resolution_Image_Reconstruction_With_Latent_Diffusion_Models_From_Human_Brain_CVPR_2023 | Abstract
Reconstructing visual experiences from human brain ac-
tivity offers a unique way to understand how the brain rep-
resents the world, and to interpret the connection between
computer vision models and our visual system. While deep
generative models have recently been employed for this
task, reconstructing realistic images with high semantic fi-
delity is still a challenging problem. Here, we propose a
new method based on a diffusion model (DM) to recon-
struct images from human brain activity obtained via func-
tional magnetic resonance imaging (fMRI). More specifi-
cally, we rely on a latent diffusion model (LDM) termed
Stable Diffusion. This model reduces the computational
cost of DMs, while preserving their high generative perfor-
mance. We also characterize the inner mechanisms of the
LDM by studying how its different components (such as the
latent vector of image Z, conditioning inputs C, and differ-
ent elements of the denoising U-Net) relate to distinct brain
functions. We show that our proposed method can recon-
struct high-resolution images with high fidelity in straight-
* Corresponding author
forward fashion, without the need for any additional train-
ing and fine-tuning of complex deep-learning models. We
also provide a quantitative interpretation of different LDM
components from a neuroscientific perspective. Overall, our
study proposes a promising method for reconstructing im-
ages from human brain activity, and provides a new frame-
work for understanding DMs. Please check out our web-
page at https://sites.google.com/view/stablediffusion-with-
brain/ .
| 1. Introduction
A fundamental goal of computer vision is to construct
artificial systems that see and recognize the world as hu-
man visual systems do. Recent developments in the mea-
surement of population brain activity, combined with ad-
vances in the implementation and design of deep neu-
ral network models, have allowed direct comparisons be-
tween latent representations in biological brains and ar-
chitectural characteristics of artificial networks, providing
important insights into how these systems operate [ 3,8–
10,13,18,19,21,42,43,54,55]. These efforts have in-
cluded the reconstruction of visual experiences (percep-
tion or imagery) from brain activity, and the examination
of potential correspondences between the computational
processes associated with biological and artificial systems
[2,5,7,24,25,27,36,44–46].
Reconstructing visual images from brain activity, such
as that measured by functional Magnetic Resonance Imag-
ing (fMRI), is an intriguing but challenging problem, be-
cause the underlying representations in the brain are largely
unknown, and the sample size typically associated with
brain data is relatively small [ 17,26,30,32]. In recent
years, researchers have started addressing this task using
deep-learning models and algorithms, including generative
adversarial networks (GANs) and self-supervised learning
[2,5,7,24,25,27,36,44–46]. Additionally, more recent
studies have increased semantic fidelity by explicitly using
the semantic content of images as auxiliary inputs for re-
construction [ 5,25]. However, these studies require train-
ing new generative models with fMRI data from scratch, or
fine-tuning toward the specific stimuli used in the fMRI ex-
periment. These efforts have shown impressive but limited
success in pixel-wise and semantic fidelity, partly because
the number of samples in neuroscience is small, and partly
because learning complex generative models poses numer-
ous challenges.
Diffusion models (DMs) [ 11,47,48,53] are deep genera-
tive models that have been gaining attention in recent years.
DMs have achieved state-of-the-art performance in several
tasks involving conditional image generation [ 4,39,49], im-
age super resolution [ 40], image colorization [ 38], and other
related tasks [ 6,16,33,41]. In addition, recently proposed
latent diffusion models (LDMs) [ 37] have further reduced
computational costs by utilizing the latent space generated
by their autoencoding component, enabling more efficient
computations in the training and inference phases. An-
other advantage of LDMs is their ability to generate high-
resolution images with high semantic fidelity. However, be-
cause LDMs have been introduced only recently, we still
lack a satisfactory understanding of their internal mecha-
nisms. Specifically, we still need to discover how they rep-
resent latent signals within each layer of DMs, how the la-
tent representation changes throughout the denoising pro-
cess, and how adding noise affects conditional image gen-
eration.
Here, we attempt to tackle the above challenges by re-
constructing visual images from fMRI signals using an
LDM named Stable Diffusion. This architecture is trained
on a large dataset and carries high text-to-image genera-
tive performance. We show that our simple framework can
reconstruct high-resolution images with high semantic fi-
delity without any training or fine-tuning of complex deep-
learning models. We also provide biological interpretations
of each component of the LDM, including forward/reversediffusion processes, U-Net, and latent representations with
different noise levels.
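As a heavily hedged illustration of the general recipe described above (mapping brain activity to the latent and conditioning inputs of an LDM and then decoding), the sketch below fits simple ridge-regression decoders; the regression model, hyperparameters, and the `decode_with_ldm` placeholder are all assumptions for illustration and are not details taken from this text.

```python
from sklearn.linear_model import Ridge

# X_train: (n_samples, n_voxels) fMRI responses (NumPy array); Z_train / C_train:
# flattened image latents z and conditioning embeddings c obtained from the stimuli.
def fit_decoders(X_train, Z_train, C_train, alpha=1e3):
    z_model = Ridge(alpha=alpha).fit(X_train, Z_train)
    c_model = Ridge(alpha=alpha).fit(X_train, C_train)
    return z_model, c_model

def reconstruct(x_test, z_model, c_model, decode_with_ldm):
    """decode_with_ldm is a hypothetical callable wrapping the (frozen) LDM's
    denoising and decoding stages; it is a placeholder, not a real API."""
    z_hat = z_model.predict(x_test[None])[0]
    c_hat = c_model.predict(x_test[None])[0]
    return decode_with_ldm(z_hat, c_hat)
```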
Our contributions are as follows: (i) We demonstrate
that our simple framework can reconstruct high-resolution
(512×512) images from brain activity with high seman-
tic fidelity, without the need for training or fine-tuning of
complex deep generative models (Figure 1); (ii) We quan-
titatively interpret each component of an LDM from a neu-
roscience perspective, by mapping specific components to
distinct brain regions; (iii) We present an objective interpre-
tation of how the text-to-image conversion process imple-
mented by an LDM incorporates the semantic information
expressed by the conditional text, while at the same time
maintaining the appearance of the original image.
|
Sung-Bin_Sound_to_Visual_Scene_Generation_by_Audio-to-Visual_Latent_Alignment_CVPR_2023 | Abstract
How does audio describe the world around us? In this pa-
per, we propose a method for generating an image of a scene
from sound. Our method addresses the challenges of dealing
with the large gaps that often exist between sight and sound.
We design a model that works by scheduling the learning pro-
cedure of each model component to associate audio-visual
modalities despite their information gaps. The key idea is to
enrich the audio features with visual information by learn-
ing to align audio to visual latent space. We translate the
input audio to visual features, then use a pre-trained genera-
tor to produce an image. To further improve the quality of
our generated images, we use sound source localization to
select the audio-visual pairs that have strong cross-modal
correlations. We obtain substantially better results on the
VEGAS and VGGSound datasets than prior approaches. We
also show that we can control our model’s predictions by
applying simple manipulations to the input waveform, or to
the latent space. | 1. Introduction
Humans have the remarkable ability to associate sounds
with visual scenes, such as how chirping birds and rustling
branches bring to mind a lush forest, and the flowing water
conjures the image of a river. These cross-modal associations
convey important information, such as the distance and size
of sound sources, and the presence of out-of-sight objects.
An emerging line of work has sought to create multi-
modal learning systems that have these cross-modal pre-
diction capabilities, by synthesizing visual imagery from
sound [15, 20, 26, 36, 37, 63, 69]. However, these existing
methods come with significant limitations, such as being
limited to simple datasets in which images and sounds are
closely correlated [63, 69], relying on vision-and-language
supervision [36], and being capable only of manipulating
Acknowledgment. This work was supported by IITP grant funded by Korea
government (MSIT) (No.2021-0-02068, Artificial Intelligence Innovation
Hub; No.2022-0-00124, Development of Artificial Intelligence Technology
for Self-Improving Competency-Aware Learning Capabilities). The GPU
resource was supported by the HPC Support Project, MSIT and NIPA.
the style of existing images [37] but not synthesis.
Addressing these limitations requires handling several
challenges. First, there is a significant modality gap between
sight and sound, as sound often lacks information that is
important for image synthesis, e.g., the shape, color, or spa-
tial location of on-screen objects. Second, the correlation
between modalities is often incongruent, e.g., highly contin-
gent or off-sync on timing. Cows, for example, only rarely
moo, so associating images of cows with “moo” sounds re-
quires capturing training examples with the rare moments
when on-screen cows vocalize.
In this work, we propose Sound2Scene, a sound-to-image
generative model and training procedure that addresses these
limitations, and which can be trained solely from unlabeled
videos. First, given an image encoder pre-trained in a self-
supervised way, we train a conditional generative adversarial
network [11] to generate images from the visual features
of the image encoder. We then train an audio encoder to
translate an input sound to its corresponding visual feature,
by aligning the audio to the visual space. Afterwards, we
can generate diverse images from sound by translating from
audio to visual embeddings and synthesizing an image. Since
our model must be capable of learning from challenging in-
the-wild videos, we use sound source localization to select
moments in time that have strong cross-modal associations.
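To make the alignment step concrete, the following is a minimal sketch of one way such an audio-to-visual latent alignment could be implemented; it assumes an InfoNCE-style batch contrastive objective and a frozen, self-supervised image encoder, and all names are illustrative rather than taken from the released code.
```python
# Hedged sketch (not the authors' code) of the audio-to-visual latent
# alignment described above. The image encoder is assumed frozen; only
# the audio encoder would receive gradients from this loss.
import torch
import torch.nn.functional as F

def alignment_loss(audio_feats, visual_feats, temperature=0.07):
    """audio_feats, visual_feats: (B, D) embeddings for paired clips."""
    a = F.normalize(audio_feats, dim=-1)
    v = F.normalize(visual_feats, dim=-1).detach()   # frozen visual targets
    logits = a @ v.t() / temperature                 # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    # Pull each audio embedding toward its paired visual embedding and
    # push it away from the other samples in the batch.
    return F.cross_entropy(logits, targets)

# Training step (illustrative):
# loss = alignment_loss(audio_encoder(waveform), image_encoder(frame))
```
Because the visual latent space is left untouched, the pre-trained generator that consumes it does not need to be retrained.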
We evaluate our model on VEGAS [73] and VG-
GSound [14], as shown in Fig. 1. Our model can synthesize
a wide variety of different scenes from sound in high quality,
outperforming the prior arts. It also provides an intuitive way
to control the image generation process by applying manipu-
lations at both the input and latent space levels, such as by
mixing multiple audios together or adjusting the loudness.
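For illustration, the input-level manipulations mentioned above could be as simple as the following sketch; this is a hypothetical pre-processing step, not the authors' interface.
```python
# Illustrative waveform-level controls applied before the audio encoder.
import torch

def mix_waveforms(wav_a, wav_b, alpha=0.5):
    """Blend two equal-length waveforms to steer the generated scene."""
    return alpha * wav_a + (1.0 - alpha) * wav_b

def set_loudness(wav, gain_db):
    """Scale loudness in dB; a larger gain emphasizes the sound source."""
    return wav * (10.0 ** (gain_db / 20.0))
```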
Our main contributions are summarized as follows:
•Proposing a new sound-to-image generation method that
can generate visually rich images from in-the-wild audio
in a self-supervised way.
•Generating high-quality images from the unrestricted di-
verse categories of input sounds for the first time.
•Demonstrating that the samples generated by our model
can be controlled by intuitive manipulations in the wave-
form space in addition to latent space.
•Showing the effectiveness of training sound-to-image gen-
eration using highly correlated audio-visual pairs.
|
Tan_Language-Guided_Audio-Visual_Source_Separation_via_Trimodal_Consistency_CVPR_2023 | Abstract
We propose a self-supervised approach for learning to
perform audio source separation in videos based on natu-
ral language queries, using only unlabeled video and au-
dio pairs as training data. A key challenge in this task is
learning to associate the linguistic description of a sound-
emitting object to its visual features and the correspond-
ing components of the audio waveform, all without access
to annotations during training. To overcome this chal-
lenge, we adapt off-the-shelf vision-language foundation
models to provide pseudo-target supervision via two novel
loss functions and encourage a stronger alignment between
the audio, visual and natural language modalities. Dur-
ing inference, our approach can separate sounds given text,
video and audio input, or given text and audio input alone.
We demonstrate the effectiveness of our self-supervised ap-
proach on three audio-visual separation datasets, includ-
ing MUSIC, SOLOS and AudioSet, where we outperform
state-of-the-art strongly supervised approaches despite not
using object detectors or text labels during training. Our
project page including publicly available code can be found
at https://cs-people.bu.edu/rxtan/projects/VAST.
| 1. Introduction
Our everyday audiovisual world is composed of many
visible sound sources, often with multiple sources layering
on top of one another. For example, consider the video of
the guitar and cello musicians playing together in Fig. 1.
The two instruments have distinct timbres, and the musi-
cians play non-unison, but complementary melodies. De-
spite hearing both instruments simultaneously, humans have
an innate ability to identify and isolate the melody of a sin-
gle source object. In this paper, we define the corresponding
machine task as follows: given a natural language query that
selects a sounding object, such as “person playing a guitar”,
separate its sound source from the input audio waveform
and localize it in the input video, without any supervision.
This task is challenging. First, there is no approach for
Figure 1. We propose to separate and localize audio sources based
on a natural language query, by learning to align the modalities on
completely unlabeled videos. In comparison, prior audio-visual
sound separation approaches require object label supervision.
associating the linguistic description of a sound-emitting
object to its visual features and the corresponding compo-
nents of the audio waveform without access to annotations
during training. Existing audio-visual methods [5, 14, 41]
do not generalize to natural language queries due to their
dependence on discrete object class labels. Second, an
ideal solution would jointly identify and localize sound-
emitting objects in videos as well as separate the corre-
sponding components in the audio waveform without strong
supervision. Although prior audio-visual work has demon-
strated the benefits of aligning relevant object regions in
the video with their corresponding sounds [5, 14], these ap-
proaches require strong supervision including object label
and bounding box annotations (see Fig. 1 top). Overcoming
these challenges would enable important downstream appli-
cations including holistic video understanding [33], embod-
ied AI [6], and bidirectional audio-to-video retrieval [38].
To address these challenges, we make the following
contributions. First, we propose Video-Audio Separation
through Text (VAST), a self-supervised approach that lever-
ages large vision-language “foundation” models [20, 26] to
provide pseudo-supervision for learning the alignment be-
tween the three modalities: audio, video and natural lan-
guage. Our key insight is to learn a strong transitive rela-
tion from audio to natural language using vision as an inter-
mediary modality, while preserving the alignment between
the visual and natural language modalities embodied by the
foundation models. However, just using the visual represen-
tations of these foundation models in existing AV separation
approaches does not preserve the transitive relationships be-
tween the three modalities (Sec. 4.1).
Our second contribution introduces two novel multi-
modal alignment objectives that encourage the learnt audio
representations to encode the semantics of captions and in-
fer the latent transitive relation between the three modali-
ties. While natural language can express a large and var-
ied range of visual concepts for audio separation in videos,
the absence of captions in unlabeled videos during training
poses a significant challenge in our self-supervised formu-
lation. To learn the transitive alignment, we adapt a founda-
tion model to extract latent captions from unlabeled videos.
Intuitively, the latent captions are representations that ex-
press the visual concepts present in the videos. Third, we
introduce a Multiple Instance Learning formulation to learn
to perform audio separation at the video region level since
we do not have prior information on relevant objects or their
locations in the videos during training.
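As an illustration of the transitive idea, the sketch below shows one plausible way to pull trainable audio embeddings toward frozen visual embeddings of a vision-language model so that text queries remain comparable to audio at test time; it is an assumption about the mechanism, not the paper's implementation, and the latent-caption extraction and region-level MIL losses are omitted.
```python
# Hedged sketch of transitive audio-vision-language alignment.
# Encoder names are placeholders; only the audio branch is trainable.
import torch
import torch.nn.functional as F

def transitive_alignment_loss(audio_emb, frozen_visual_emb):
    """Align trainable audio embeddings (B, D) to frozen visual embeddings."""
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(frozen_visual_emb, dim=-1).detach()  # keep the VL space fixed
    return (1.0 - (a * v).sum(dim=-1)).mean()            # cosine-distance loss

def text_audio_similarity(text_emb, audio_emb):
    """Because audio lives in the frozen VL space, text queries can score it."""
    t = F.normalize(text_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    return t @ a.t()
```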
Finally, we demonstrate the effectiveness of our pro-
posed VAST approach through extensive evaluations on
the audio source separation task on the SOLOS [24], MU-
SIC [41], and AudioSet [15] datasets. We show that our
self-supervised approach outperforms strongly-supervised
state-of-the-art approaches without using labels during
training by leveraging the capability of vision-language
foundation models. More importantly, we demonstrate that
VAST learns to use language queries for audio separation
despite not training with ground-truth language supervision.
|
Tang_Uncertainty-Aware_Unsupervised_Image_Deblurring_With_Deep_Residual_Prior_CVPR_2023 | Abstract
Non-blind deblurring methods achieve decent perfor-
mance under the accurate blur kernel assumption. Since
the kernel uncertainty (i.e. kernel error) is inevitable in
practice, semi-blind deblurring is suggested to handle it by
introducing the prior of the kernel (or induced) error. How-
ever, how to design a suitable prior for the kernel (or in-
duced) error remains challenging. Hand-crafted prior, in-
corporating domain knowledge, generally performs well but
may lead to poor performance when kernel (or induced) er-
ror is complex. Data-driven prior, which excessively de-
pends on the diversity and abundance of training data, is
vulnerable to out-of-distribution blurs and images. To ad-
dress this challenge, we suggest a dataset-free deep resid-
ual prior for the kernel induced error (termed as residual)
expressed by a customized untrained deep neural network,
which allows us to flexibly adapt to different blurs and im-
ages in real scenarios. By organically integrating the re-
spective strengths of deep priors and hand-crafted priors,
we propose an unsupervised semi-blind deblurring model
which recovers the clear image from the blurry image and
inaccurate blur kernel. To tackle the formulated model, an
efficient alternating minimization algorithm is developed.
Extensive experiments demonstrate the favorable perfor-
mance of the proposed method as compared to model-driven
and data-driven methods in terms of image quality and the
robustness to different types of kernel error.
| 1. Introduction
Image blurring is mainly caused by camera shake [28],
object motion [9], and defocus [42]. By assuming the blur
kernel is shift-invariant, the image blurring can be formu-
lated as the following convolution process:
y = k ⊗ x + n,    (1)
(Figure 1 panels, left to right: Blurry, PSNR 19.56; [8], PSNR 20.83; [34], PSNR 22.75; Ours, PSNR 25.66. Estimated residuals: True Res., MSE 0; Res. in [8], MSE 0.075; Res. in [34], MSE 0.067; Our Res., MSE 0.027.)
Figure 1. Visual comparison of the restored results and esti-
mated residuals by three semi-blind methods based on different
priors for the residual induced by the kernel error, including hand-
crafted prior [8], data-driven prior [34], and the proposed deep
residual prior (DRP). The true residual is the convolution result of
the kernel error Δk and the clear image x (r = Δk ⊗ x). The closer the
estimated residual is to the true residual, the better it is.
where y and x denote the blurry image and the clear im-
age respectively, k represents the blur kernel, n represents
the additive Gaussian noise, and ⊗ is the convolution oper-
ator. To acquire the clear image from the blurry one, image
deblurring has received considerable research attention and
related methods have been developed.
In terms of the availability of kernel, current image de-
blurring methods can be mainly classified into two cate-
gories, i.e., blind deblurring methods in which the blur ker-
nel is assumed to be unknown, and non-blind deblurring
methods in which the blur kernel is assumed to be known
or computed elsewhere. Typical blind deblurring methods
[13, 15, 17, 18, 21, 27, 31, 32, 38, 43] involve two steps:
1) estimating the blur kernel from the blurry images, and 2)
recovering the clear image with the estimated blur kernel.
Recently there also emerge transformer-based [36, 39] and
unfolding networks [19] that learn direct mappings from
blurry image to the deblurred one without using the kernel.
Non-blind deblurring methods [1, 4, 5, 6, 11, 22, 35, 40],
based on various priors for the clear image, estimate the
clear image solely from the blurry image with known blur
kernel. Notably, existing non-blind deblurring methods can
perform well under the error-free kernel assumption. How-
ever, in the real application, uncertainty exists in the kernel
acquisition process. As a result, these methods without han-
dling kernel uncertainty often introduce artifacts and cause
unpleasant performances.
Recently, semi-blind methods are suggested to handle
kernel uncertainty by introducing the prior for the kernel
(or induced) error. In the literature, there are two groups of
priors of kernel (or induced) error, i.e., hand-crafted priors
and data-driven priors. Hand-crafted priors [8, 41], incorpo-
rating domain knowledge, generally perform well but may
lead to poor performance when the distribution of kernel (or
induced) error is complex. For example, hand-crafted priors
(e.g., sparse prior [8]) are relatively impotent to characterize
the complex intrinsic structure of the kernel induced error;
see Figure 1. Data-driven priors [20, 26, 34], which ex-
cessively depend on the diversity and abundance of training
data, are vulnerable to out-of-distribution blurs and images.
Specifically, the data-driven prior in [34] that is expressed
by a trained network introduces artifacts around the sharp
edges; see Figure 1. Therefore, how to design a suitable
prior for the kernel (or induced) error remains challenging.
To address this problem, we suggest a dataset-free deep
prior called deep residual prior (DRP) for the kernel induced
error (termed as residual), which leverages the strong rep-
resentation ability of deep neural networks. Specifically,
DRP is expressed by an untrained customized deep neu-
ral network. Moreover, by leveraging the general domain
knowledge, we use the sparse prior to guide DRP to form
a semi-blind deblurring model. This model organically in-
tegrates the respective strengths of deep priors and hand-
crafted priors to achieve favorable performance. To the best
of our knowledge, we are the first to introduce the untrained
network to capture the kernel induced error in semi-blind
problems, which is a featured contribution of our work.
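The sketch below illustrates, under stated assumptions, how such a semi-blind objective could be optimized: the residual is produced by an untrained network from a fixed input, the data term uses the inexact kernel, and a hand-crafted sparse prior regularizes the residual. The network architecture, the TV prior on the sharp image, and all hyper-parameters are placeholders, not the authors' choices.
```python
# Hedged sketch of the semi-blind model: y ≈ k ⊗ x + r, with r parameterized
# by an untrained CNN (deep-image-prior style) and encouraged to be sparse.
import torch
import torch.nn.functional as F

def blur(x, k):
    """Convolve image x (1,1,H,W) with kernel k (1,1,h,w), 'same' padding."""
    return F.conv2d(x, k, padding=(k.shape[-2] // 2, k.shape[-1] // 2))

def tv(x):
    """Assumed total-variation prior on the latent sharp image."""
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def semi_blind_step(y, k, x, residual_net, z, opt, lam_sparse=1e-3, lam_tv=1e-4):
    """One joint update of x and the residual network (both are in opt)."""
    opt.zero_grad()
    r = residual_net(z)                        # untrained net captures the residual
    loss = F.mse_loss(blur(x, k) + r, y)       # data fidelity with the inexact kernel
    loss = loss + lam_sparse * r.abs().mean()  # hand-crafted sparse prior on r
    loss = loss + lam_tv * tv(x)               # assumed prior on the sharp image x
    loss.backward()
    opt.step()
    return loss.item()
```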
In summary, our contributions are mainly three-fold:
For the residual induced by the kernel uncertainty, we
elaborately design a dataset-free DRP, which allows us to
faithfully capture the complex residual in real-world appli-
cations as compared to hand-crafted priors and data-driven
priors.
Empowered by the deep residual prior, we suggest an
unsupervised semi-blind deblurring model by synergizing
the respective strengths of dataset-free deep prior and hand-
crafted prior, which work together to deliver promising re-
sults.
Extensive experiments on different blurs and images sus-
tain the favorable performance of our method, especially for
the robustness to kernel error. |
Tian_Robot_Structure_Prior_Guided_Temporal_Attention_for_Camera-to-Robot_Pose_Estimation_CVPR_2023 | Abstract
In this work, we tackle the problem of online camera-to-
robot pose estimation from single-view successive frames
of an image sequence, a crucial task for robots to inter-
act with the world. The primary obstacles of this task are
the robot’s self-occlusions and the ambiguity of single-view
images. This work demonstrates, for the first time, the ef-
fectiveness of temporal information and the robot structure
prior in addressing these challenges. Given the succes-
sive frames and the robot joint configuration, our method
learns to accurately regress the 2D coordinates of the pre-
defined robot’s keypoints (e.g. joints). With the camera in-
trinsic and robotic joints status known, we get the camera-
to-robot pose using a Perspective-n-point (PnP) solver. We
further improve the camera-to-robot pose iteratively using
the robot structure prior. To train the whole pipeline, we
build a large-scale synthetic dataset generated with do-
main randomisation to bridge the sim-to-real gap. The ex-
tensive experiments on synthetic and real-world datasets
and the downstream robotic grasping task demonstrate that
our method achieves new state-of-the-art performances and
outperforms traditional hand-eye calibration algorithms in
real-time (36 FPS). Code and data are available at the
project page: https://sites.google.com/view/sgtapose.
| 1. Introduction
Camera-to-robot pose estimation is a crucial task in
determining the rigid transformation between the camera
space and robot base space in terms of rotation and trans-
lation. Accurate estimation of this transformation enables
robots to perform downstream tasks autonomously, such as
grasping, manipulation, and interaction. Classic camera-
to-robot estimation approaches, e.g.[11, 14, 33], typically
involve attaching augmented reality (AR) tags as mark-
ers to the end-effector and directly solving a homogeneous
matrix equation to calculate the transformation. However,
these approaches have critical drawbacks. Capturing mul-
tiple joint configurations and corresponding images is al-
Figure 1. Overview of the proposed SGTAPose. Given a tem-
poral sequence of RGB frames and known robot structure priors,
our method estimates the 2D keypoints ( e.g., joints) of the robot
and performs real-time estimation of the camera-to-robot pose by
combining a Perspective-n-point (PnP) solver (left). This real-time
camera-to-robot pose estimation approach can be utilised for vari-
ous downstream tasks, such as robotic grasping (right).
ways troublesome, and these methods cannot be used on-
line. These flaws become greatly amplified when down-
stream tasks require frequent camera position adjustment.
To mitigate this limitation of classic offline hand-eye cal-
ibration, some recent works [19, 20] introduce vision-based
methods to estimate the camera-to-robot pose from a single
image, opening the possibility of online hand-eye calibra-
tion. Such approaches significantly grant mobile and itin-
erant autonomous systems the ability to interact with other
robots using only visual information in unstructured envi-
ronments, especially in collaborative robotics [21].
Most existing learning-based camera-to-robot pose esti-
mation works [19, 21, 26, 30] focus on single-frame estima-
tion. However, due to the ambiguity of the single-view im-
age, these methods do not perform well when the robotic
arm is self-occluded. Since the camera-to-robot pose is
likely invariant during a video sequence and the keypoints
are moving continually, one way to tackle this problem is
to introduce temporal information. However, a crucial tech-
nical challenge of estimating camera-to-robot pose tempo-
rally is how to fuse temporal information efficiently. To
this end, as shown in Fig. 1, we propose Structure Prior
Guided Temporal Attention for Camera-to-Robot Pose es-
timation (SGTAPose) from successive frames of an image
sequence. First, we propose a robot structure prior guided
feature alignment approach to align the temporal features in
two successive frames. Moreover, we apply a multi-head-
cross-attention module to enhance the fusion of features in
sequential images. Then, after a decoder layer, we solve
an initial camera-to-robot pose from the 2D projections of
detected keypoints and their 3D positions via a PnP solver.
We lastly reuse the structure priors as an explicit constraint
to acquire a refined camera-to-robot pose.
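For reference, the PnP step itself can be sketched as follows; this is an illustrative use of a standard solver (OpenCV's solvePnP), not the authors' pipeline, and it assumes the 3D joint positions in the robot base frame and the camera intrinsics K are known.
```python
# Hedged sketch of recovering the camera-to-robot transform from predicted
# 2D keypoints, known 3D joints (forward kinematics), and camera intrinsics.
import numpy as np
import cv2

def camera_to_robot_pose(joints_3d, keypoints_2d, K, dist=None):
    """joints_3d: (N,3) in the robot base frame; keypoints_2d: (N,2) pixels."""
    ok, rvec, tvec = cv2.solvePnP(
        joints_3d.astype(np.float64),
        keypoints_2d.astype(np.float64),
        K.astype(np.float64),
        dist,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)            # rotation from robot base to camera
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T                              # 4x4 camera-from-robot transform
```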
By harnessing the temporal information and the robot
structure priors, our proposed method gains significant per-
formance improvement in the accuracy of camera-to-robot
pose estimation and is more robust to robot self-occlusion.
We have surpassed previous online camera calibration ap-
proaches in synthetic and real-world datasets and show a
strong dominance in minimising calibration error compared
with traditional hand-eye calibration, where our method
could reach the level of 5mm calibration errors via multi-
frame PnP solving. Finally, to test our method’s capability
in real-world experiments, we directly apply our predicted
pose to help implement grasping tasks. We have achieved a
fast prediction speed (36FPS) and a high grasping success
rate. Our contributions are summarised as follows:
• For the first time, we demonstrate the remarkable
performance of camera-to-robot pose estimation from
successive frames of a single-view image sequence.
• We propose a temporal cross-attention strategy absorb-
ing robot structure priors to efficiently fuse successive
frames’ features to estimate camera-to-robot pose.
• We demonstrate our method’s capability of imple-
menting downstream online grasping tasks in the real
world with high accuracy and stability, even beyond
the performance of classical hand-eye calibration.
|
Song_Unsupervised_Deep_Asymmetric_Stereo_Matching_With_Spatially-Adaptive_Self-Similarity_CVPR_2023 | Abstract
Unsupervised stereo matching has received a lot of atten-
tion since it enables the learning of disparity estimation
without ground-truth data. However, most of the un-
supervised stereo matching algorithms assume that the
left and right images have consistent visual properties,
i.e., symmetric, and easily fail when the stereo images
are asymmetric. In this paper, we present a novel
spatially-adaptive self-similarity (SASS) for unsupervised
asymmetric stereo matching. It extends the concept of
self-similarity and generates deep features that are robust
to the asymmetries. The sampling patterns to calculate
self-similarities are adaptively generated throughout the
image regions to effectively encode diverse patterns. In
order to learn the effective sampling patterns, we design
a contrastive similarity loss with positive and negative
weights. Consequently, SASS is further encouraged to
encode asymmetry-agnostic features, while maintaining
the distinctiveness for stereo correspondence. We present
extensive experimental results including ablation studies
and comparisons with different methods, demonstrating
effectiveness of the proposed method under resolution and
noise asymmetries.
| 1. Introduction
Scene depth is an indispensable information in computer
vision, as it can benefit numerous subsequent applications
including scene recognition [5, 18], 3D scene reconstruc-
tion [22], and autonomous driving [17]. Stereo matching,
which aims to find disparities of corresponding points in
rectified left and right (stereo) images, has been widely ex-
plored since the disparity can be directly converted to depth
with camera calibration parameters. Recent advent of large-
scale datasets and advanced hardware led the researchers
to solve stereo matching with Convolutional Neural Net-
works (CNNs). It resulted in a number of CNN-based
stereo matching algorithms that are learned in both super-
vised [1, 16, 19] and unsupervised manner [3, 25]. Even
though the recent methods have achieved significant gain
in both accuracy and speed, the existing algorithms assume
that the stereo images are symmetric , where the stereo im-
ages have consistent visual properties in terms of bright-
ness, resolution, noise level, modality, etc.
Recently, multi-camera systems have become more com-
mon, such as RGB-NIR cameras in Kinect, and tele-wide
cameras in smartphones. Such systems usually consist of
different sensors, resulting in asymmetric stereo images,
i.e., the stereo images with different visual properties. The
asymmetric images are embedded into inconsistent features
and make it difficult to accurately calculate the cost volume.
Furthermore, the most widely adopted assumption for un-
supervised stereo matching, photometric consistency, is in-
valid for the corresponding points in the asymmetric stereo
images [2]. Consequently, the widely-used stereo matching
methods assuming symmetric images [1,3,19] easily fail in
the asymmetric scenario [15].
There have been relatively less efforts to handle stereo
matching under asymmetries such as visual quality [2, 15]
and spectrum [23, 31]. Several methods adopt supervised
[15], or proxy-supervised [23] paradigm to solve the deep
asymmetric stereo matching. However, such methods re-
quire additional active depth [15] or image [23] sensor to ac-
quire the training label, which makes it difficult to construct
the training data. In order to tackle the problem and learn
the asymmetric stereo matching in an unsupervised manner,
a few methods adopt feature consistency loss [2, 24]. On
the other hand, several spectral-asymmetric stereo match-
ing methods use unpaired image-to-image translation [33]
algorithm to project the images into a same spectrum, fol-
lowed by photometric consistency loss [14,31]. A common
approach in the unsupervised asymmetric stereo matching
methods is to transfer the images into a shared space to ex-
Figure 1. Self-similarity sampling patterns of (a) FCSS [10] and
(b) the proposed SASS. For different pixels indicated with red cir-
cles, the sampling patterns are represented with squares, connected
with dashed lines. FCSS has equivalent patterns for all pixels,
while the proposed SASS generates adaptive patterns.
ploit the consistency constraint as training loss. The im-
portance of consistent space in loss calculation for unsuper-
vised stereo matching is further emphasized in [2].
There have been a number of researches to extract im-
age features that are robust to different types of varia-
tions. In [21], Local Self-Similarity (LSS) descriptor has
been presented based on an observation that local inter-
nal layout of self-similarity is less sensitive to photomet-
ric differences. It has demonstrated impressive robust-
ness against large modality differences, and various deriva-
tions based on self-similarity have been formulated in hand-
crafted [11,12] and deep-learning [10] frameworks, demon-
strating effectiveness in cross-modal visual [11, 12] and se-
mantic [10] correspondences. In [10,11], in order to design
self-similarity based descriptors with improved robustness
and efficiency, sampling patterns are learned throughout the
data. However, the learned sampling patterns are fixed for
all regions as in Fig. 1(a), limiting the capability to encode
robust features of varying geometries across the images.
In this paper, we present a novel Spatially-Adaptive
Self-Similarity (SASS) for unsupervised stereo matching in
asymmetric scenario. Motivated by the importance of the
symmetry in loss calculation [2], we design a novel frame-
work to extract asymmetry-agnostic features. We take ad-
vantage of self-similarity [21], which is robust to do-
main discrepancy, and further extend it by adaptively gen-
erating the sampling patterns across the spatial locations, as
illustrated in Fig. 1(b). It enables us to extract asymmetry-
agnostic features from the asymmetric stereo images to cal-
culate the stereo matching loss in a symmetric space. In
addition, we design a contrastive similarity loss with ad-
ditional positive and negative weights to further encourage
the asymmetry-agnostic property of the SASS, while pre-
serving the discriminative capability.
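A minimal sketch of what such a spatially-adaptive self-similarity descriptor could look like is given below; it assumes per-pixel sampling offsets predicted by some auxiliary network and cosine similarities as the descriptor entries, and it illustrates the idea rather than the paper's implementation (the contrastive similarity loss and its positive/negative weighting are not shown).
```python
# Hedged sketch: gather features at per-pixel adaptive offsets with
# grid_sample and compare them to the center feature.
import torch
import torch.nn.functional as F

def sass_descriptor(feat, offsets):
    """feat: (B,C,H,W); offsets: (B,K,2,H,W), offsets given in [-1,1] units."""
    B, C, H, W = feat.shape
    K = offsets.shape[1]
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H, device=feat.device),
        torch.linspace(-1, 1, W, device=feat.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1)                   # (H,W,2) normalized grid
    sims = []
    for k in range(K):
        off = offsets[:, k].permute(0, 2, 3, 1)            # (B,H,W,2) adaptive offsets
        grid = (base.unsqueeze(0) + off).clamp(-1, 1)
        sampled = F.grid_sample(feat, grid, align_corners=True)
        # similarity between the center feature and its adaptively sampled neighbor
        sims.append(F.cosine_similarity(feat, sampled, dim=1, eps=1e-6))
    return torch.stack(sims, dim=1)                        # (B,K,H,W) descriptor
```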
The main contributions of this paper are summarized as:
We propose a novel Spatially-Adaptive Self-Similarity(SASS) to adaptively encode asymmetry-agnostic fea-
tures for unsupervised asymmetric stereo matching.
The features are used to calculate the unsupervised
stereo matching loss based on view consistency.
We design a contrastive similarity loss with a novel
positive and negative weighting strategy to further en-
hance the asymmetry-agnostic property while main-
taining the discriminative capability of SASS.
Extensive experimental results including ablation stud-
ies and comparisons with different methods demon-
strate the effectiveness of the proposed method on res-
olution and noise asymmetries.
The rest of this paper is organized as follows: In Sec. 2, we
present previous works that are related to ours. Sec. 3 ex-
plains the background and details of the proposed method.
Experimental results are given in Sec. 4, followed by con-
clusion and future works in Sec. 5.
|
Tang_Fair_Scratch_Tickets_Finding_Fair_Sparse_Networks_Without_Weight_Training_CVPR_2023 | Abstract
Recent studies suggest that computer vision models come
at the risk of compromising fairness. There are exten-
sive works to alleviate unfairness in computer vision using pre-processing, in-processing, and post-processing meth-
ods. In this paper , we lead a novel fairness-aware learning
paradigm for in-processing methods through the lens of the lottery ticket hypothesis (LTH) in the context of computer vision fairness. We randomly initialize a dense neural net-
work and find appropriate binary masks for the weights to
obtain fair sparse subnetworks without any weight training. Interestingly, to the best of our knowledge, we are the first
to discover that such sparse subnetworks with inborn fair-
ness exist in randomly initialized networks, achieving an
accuracy-fairness trade-off comparable to that of dense
neural networks trained with existing fairness-aware in-
processing approaches. We term these fair subnetworks
as Fair Scratch Tickets (FSTs). We also theoretically provide fairness and accuracy guarantees for them. In our
experiments, we investigate the existence of FSTs on var-
ious datasets, target attributes, random initialization methods, sparsity patterns, and fairness surrogates. We also find that FSTs can transfer across datasets and investigate other properties of FSTs.
| 1. Introduction
In recent years, deep neural networks (DNN) has become
one of the core technologies in computer vision (CV). How-
ever, it has been observed that CV models learn spurious age, gender, and race correlations when trained for seem-
ingly unrelated tasks [ 7,67]. There are growing appeals
for fairness-aware learning [ 58]. A model should not dis-
criminate against any demographic group with sensitive at-tributes [ 3,15,60,63,76].
Extensive work has been done to alleviate unfairness
in CV using pre-processing [ 37,54,64,66], in-processing
[5,6,12,57], and post-processing methods [ 39,74]. Only
in-processing approaches can optimize notions of fairnessduring model training. Such methods have direct con-
trol over the optimization function of the model [ 8] and
have attracted great attention in the research community.
Popular in-processing ideas include fairness regularization[5,12,13,33,49,52,57,69] and fairness-aware adversarial
training [ 6,19,44,72]. Fairness regularization is to intro-
duce regularization terms to penalize unfairness. Fairness-aware adversarial training uses an adversary to predict the
sensitive attribute and enforces the main classifier to pre-
vent the adversary from predicting successfully. However,most in-processing methods leverage deep and dense neuralnetworks so that they are computationally intensive during
the inference phase [ 28]. But model compression methods
which scale down overparameterized models will introduce
or exacerbate unfairness [ 34,35,63].
In this paper, to fill the research gap, we raise an intrigu-
ing and challenging question: Is there a learning paradigm
without weight training that is plug-and-play for bias mit-
igation approaches in computer vision ? Intuitively, the re-
cently proposed Lottery Ticket Hypothesis (LTH) [ 20]i s
a natural fit for our needs. LTH focuses on finding sparse
trainable subnetworks (winning tickets) that reach test accu-
racy comparable to the original dense neural network. The
primal training method in [ 20] is iteratively pruning and re-
training the neural network. Interestingly, some researchers
empirically discover that winning tickets can be found with-
out weight training [ 53,75], which is theoretically validated
in [14,45,48,50]. Both empirical observations and theoret-
ical results have verified the feasibility of finding winning
tickets without training the weights of the neural networks.
Motivated by the above, we break down the original ques-tion into three sub-questions instead:
• Q1: Is there a fair winning ticket?
• Q2: How can we find it without weight training?
• Q3: Is it easy to generalize on various datasets, tar-
get attributes, random initialization methods, sparsity patterns and fairness surrogates?
For the first question , Proposition 1states that a suf-
ficiently over-parameterized neural network with randomweights contains a subnetwork that can approximate any
target neural network with high probability under some con-
ditions. Furthermore, our Theorem 1shows that if we suc-
cessfully find a sparse neural network that approximates a
fair and accurate neural network well, then the sparse neu-
ral network is also fair and accurate. Combining the resultsof Proposition 1and Theorem 1, they answer our first ques-
tion by clarifying the possibility of finding fair and accurate
winning tickets without any weight training. To our bestknowledge, LTH remains poorly understood in the contextof fairness. For the second question , note that the proof
of Theorem 2.1 in [ 45] follows a constructive routine for
masking. Therefore, it sheds light on the feasibility of find-
ing fair winning tickets without any weight training by de-signing an appropriate masking scheme, and that is exactly
what we do. We randomly initialize a DNN and searchfor masks to iteratively find Fair Scratch Tickets (FSTs).
In particular, following [ 53], we search for the best bi-
nary masks by optimizing a continuously updated learnable
score for each weight. For the third question , to verify
the generality of FST, we demonstrate its effectiveness in
two famous types of in-processing approaches in CV fair-ness: fairness regularization [ 5] and fairness-aware adver-
sarial training [ 72]. Extensive experiments verify the exis-
tence of FSTs on various datasets, target attributes, random
initialization methods, sparsity patterns and fairness surro-gates. We further show the properties of fine-tuning andtransferability of FSTs.
Overall, our contributions are threefold:
• We theoretically and empirically confirm the existence
ofwinning tickets with inborn fairness . And we extend
the application scenario of LTH to CV fairness.
• We propose a brand new plug-and-play learning
paradigm that does not require weight training for the
CV fairness community.
• Extensive experiments verify the existence of FSTs
on various datasets, target attributes, random initial-ization methods, sparsity patterns and fairness surro-gates. Furthermore, we show the properties of fine-
tuning and transferability of FSTs.
|
Sun_Next3D_Generative_Neural_Texture_Rasterization_for_3D-Aware_Head_Avatars_CVPR_2023 | Abstract
3D-aware generative adversarial networks (GANs) syn-
thesize high-fidelity and multi-view-consistent facial images
using only collections of single-view 2D imagery. Towards
fine-grained control over facial attributes, recent efforts
incorporate 3D Morphable Face Model (3DMM) to de-
scribe deformation in generative radiance fields either ex-
plicitly or implicitly. Explicit methods provide fine-grained
expression control but cannot handle topological changes
caused by hair and accessories, while implicit ones can
model varied topologies but have limited generalization
caused by the unconstrained deformation fields. We pro-
pose a novel 3D GAN framework for unsupervised learn-
ing of generative, high-quality and 3D-consistent facial
avatars from unstructured 2D images. To achieve both de-
formation accuracy and topological flexibility, we propose
a 3D representation called Generative Texture-Rasterized
Tri-planes. The proposed representation learns Genera-
tive Neural Textures on top of parametric mesh templatesand then projects them into three orthogonal-viewed feature
planes through rasterization, forming a tri-plane feature
representation for volume rendering. In this way, we com-
bine both fine-grained expression control of mesh-guided
explicit deformation and the flexibility of implicit volumet-
ric representation. We further propose specific modules for
modeling mouth interior which is not taken into account
by 3DMM. Our method demonstrates state-of-the-art 3D-
aware synthesis quality and animation ability through ex-
tensive experiments. Furthermore, serving as 3D prior, our
animatable 3D representation boosts multiple applications
including one-shot facial avatars and 3D-aware styliza-
tion. Project page: https://mrtornado24.github.io/Next3D/.
Code: https://github.com/MrTornado24/Next3D.
| 1. Introduction
Animatable portrait synthesis is essential for movie post-
production, visual effects, augmented reality (AR), and vir-
tual reality (VR) telepresence applications. Efficient ani-
matable portrait generators should be capable of synthesiz-
ing diverse high-fidelity portraits with full control of the
rigid head pose, facial expressions and gaze directions at
a fine-grained level. The main challenges of this task lie
in how to model accurate deformation and preserve iden-
tity through animation in the generative setting, i.e. training
with only unstructured corpus of 2D images.
Several 2D generative models perform image anima-
tion by incorporating the 3D Morphable Face Models
(3DMM) [4] into the portrait synthesis [13, 16, 35, 52, 62,
65, 70, 73]. These 2D-based methods achieve photorealism
but suffer from shape distortion during large motion due to
a lack of geometry constraints. Towards better view con-
sistency, many recent efforts incorporate 3DMM with 3D
GANs, learning to synthesize animatable and 3D consis-
tent portraits from only 2D image collections in an unsu-
pervised manner [3, 30, 39, 44, 60, 61, 68, 74]. Bergman et
al. [3] propose an explicit surface-driven deformation field
for warping radiance fields. While modeling accurate facial
deformation, it cannot handle topological changes caused
by non-facial components, e.g. hair, glasses, and other
accessories. AnifaceGAN [68] builds an implicit 3DMM-
conditioned deformation field and constrains animation ac-
curacy by imitation learning. It achieves smooth animation
on interpolated expressions, however, struggles to generate
reasonable extrapolation due to the under-constrained de-
formation field. Therefore, The key challenge of this task is
modeling deformation in the 3D generative setting for ani-
mation accuracy and topological flexibility.
In this paper, we propose a novel 3D GAN framework
for unsupervised learning of generative, high-quality, and
3D-consistent facial avatars from unstructured 2D images.
Our model splits the whole head into dynamic and static
parts, and models them respectively. For dynamic parts, the
key insight is to combine both fine-grained expression con-
trol of mesh-guided explicit deformation and flexibility of
implicit volumetric representation. To this end, we propose
a novel representation, Generative Texture-Rasterized Tri-
planes , which learns the facial deformation through Gen-
erative Neural Textures on top of a parametric template
mesh and samples them into three orthogonal-viewed and
axis-aligned feature planes through standard rasterization,
forming a tri-plane feature representation. Such texture-
rasterized tri-planes re-form high-dimensional dynamic sur-
face features in a volumetric representation for efficient vol-
ume rendering and thus inherit both the accurate control of
the mesh-driven deformation and the expressiveness of vol-
umetric representations. Furthermore, we represent static
components (body, hair, background, etc.) by another tri-
plane branch, and integrate both through alpha blending.
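For intuition, the volume-rendering side of such a representation can be sketched as a standard tri-plane query; the sketch below assumes the rasterization of the neural texture has already written features into three axis-aligned planes and simply shows how a 3D sample is projected onto them and aggregated. It is illustrative, not the released implementation.
```python
# Hedged sketch of querying a tri-plane feature volume at 3D sample points.
import torch
import torch.nn.functional as F

def sample_triplane(planes, pts):
    """planes: (3, C, H, W) features for the xy, xz, yz planes; pts: (N, 3) in [-1,1]."""
    xy = pts[:, [0, 1]]
    xz = pts[:, [0, 2]]
    yz = pts[:, [1, 2]]
    feats = []
    for plane, coords in zip(planes, (xy, xz, yz)):
        grid = coords.view(1, -1, 1, 2)                         # (1, N, 1, 2)
        f = F.grid_sample(plane.unsqueeze(0), grid,
                          mode="bilinear", align_corners=True)  # (1, C, N, 1)
        feats.append(f.squeeze(0).squeeze(-1).t())              # (N, C)
    return torch.stack(feats, dim=0).mean(dim=0)                # aggregate the planes
```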
Another key insight of our method is to model the mouth
interior which is not taken into account by 3DMM. Mouthinterior is crucial for animation quality but often ignored
by prior arts. We propose an efficient teeth synthesis mod-
ule, formed as a style-modulated UNet, to complete the in-
ner mouth features missed by the template mesh. To fur-
ther regularize the deformation accuracy, we introduce a
deformation-aware discriminator which takes as input syn-
thetic renderings, encouraging the alignment of the final
outputs with the 2D projection of the expected deformation.
To summarize, the contributions of our approach are:
• We present an animatable 3D-aware GAN framework
for photorealistic portrait synthesis with fine-grained
animation, including expressions, eye blinks, gaze di-
rection and full head poses.
• We propose Generative Texture-Rasterized Triplanes ,
an efficient deformable 3D representation that inherits
both fine-grained expression control of mesh-guided
explicit deformation and flexibility of implicit volu-
metric representation.
• Our learned generative animatable 3D representation
can serve as a strong 3D prior and boost the down-
stream application of 3D-aware one-shot facial avatars.
Our model also pushes the frontier of 3D stylization
with high-quality out-of-domain facial avatars.
|
Tang_3D_Human_Pose_Estimation_With_Spatio-Temporal_Criss-Cross_Attention_CVPR_2023 | Abstract
Recent transformer-based solutions have shown great
success in 3D human pose estimation. Nevertheless, to cal-
culate the joint-to-joint affinity matrix, the computational
cost has a quadratic growth with the increasing number
of joints. Such drawback becomes even worse especially
for pose estimation in a video sequence, which necessitates
spatio-temporal correlation spanning over the entire video.
In this paper, we facilitate the issue by decomposing cor-
relation learning into space and time, and present a novel
Spatio-Temporal Criss-cross attention (STC) block. Tech-
nically, STC first slices its input feature into two partitions
evenly along the channel dimension, followed by perform-
ing spatial and temporal attention respectively on each par-
tition. STC then models the interactions between joints in an
identical frame and joints in an identical trajectory simulta-
neously by concatenating the outputs from attention layers.
On this basis, we devise STCFormer by stacking multiple
STC blocks and further integrate a new Structure-enhanced
Positional Embedding (SPE) into STCFormer to take the
structure of human body into consideration. The embedding
function consists of two components: spatio-temporal con-
volution around neighboring joints to capture local struc-
ture, and part-aware embedding to indicate which part each
joint belongs to. Extensive experiments are conducted on
Human3.6M and MPI-INF-3DHP benchmarks, and supe-
rior results are reported when comparing to the state-of-
the-art approaches. More remarkably, STCFormer achieves
to-date the best published performance: 40.5mm P1 error
on the challenging Human3.6M dataset.
| 1. Introduction
3D human pose estimation has attracted intensive re-
search attention in CV community due to its great poten-
Figure 1. Modeling spatio-temporal correlation for 3D human
pose estimation by (a) utilizing spatio-temporal attention on all
joints in the entire video, (b) separating the framework into two
steps that respectively capture spatial and temporal context, and
(c) our Spatio-Temporal Criss-cross attention (STC), i.e., a two-
pathway block that models spatial and temporal information in
parallel. In the visualization of receptive field, the covered joints
of each attention strategy are marked as red nodes.
tial in numerous applications such as human-robot inter-
action [20, 43], virtual reality [11] and motion prediction
[27, 28]. The typical monocular solution is a two-stage
pipeline, which first extracts 2D keypoints by 2D human
pose detectors (e.g., [7] and [41]), and then lifts 2D coordi-
nates into 3D space [31]. Despite its simplicity, the second
stage is an ill-posed problem which lacks the depth prior,
and suffers from the ambiguity problem.
To mitigate this issue, several progresses propose to ag-
gregate the temporal cues in a video sequence to promote
pose estimation by grid convolutions [15,26,35], graph con-
volutions [4, 47] and multi-layer perceptrons [6, 21]. Re-
cently, Transformer structure has emerged as a dominant ar-
chitecture in both NLP and CV fields [8,24,45,49], and also
demonstrated high capability in modeling spatio-temporal
correlation for 3D human pose estimation [13, 22, 23, 25,
48, 52, 54]. Figure 1(a) illustrates a straightforward way
to exploit the transformer architecture for directly learning
spatio-temporal correlation between all joints in the entire
video sequence. However, the computational cost of calcu-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
4790
lating the joint-to-joint affinity matrix in the self-attention
has a quadratic growth along the increase of number of
frames, making such solution unpractical for model train-
ing. As a result, most transformer structures employ a two-
step alternative, as shown in Figure 1(b), which encodes
spatial information for each frame first and then aggregates
the feature sequence by temporal transformer. Note that we
take spatial transformer as the frame encoder as an example
in the figure. This strategy basically mines the correlation
across frame-level features but seldom explores the relation
between joints across different frames.
In this paper, we propose a novel two-pathway attention
mechanism, namely Spatio-Temporal Criss-cross attention
(STC), that models spatial and temporal information in par-
allel, as depicted in Figure 1(c). Concretely, STC first slices
the input joint features into two partitions evenly with re-
spect to the channel dimension. On each partition, a Multi-
head Self-Attention (MSA) is implemented to encapsulate
the context along space or time axis. In between, the space
pathway computes the affinity between joints in each frame
independently, and the time pathway correlates the identi-
cal joint moving across different frames, i.e., the trajectory.
Then, STC recombines the learnt contexts from two path-
ways, and mixes the information across channels by Multi-
Layer Perceptrons (MLP). By doing so, the receptive field is
like a criss cross of spatial and temporal axes, and the com-
putational cost is O(T2S) +O(TS2). That is much lower
thanO(T2S2)of fully spatio-temporal attention, where T
andSdenote the number of frames and joints, respectively.
By stacking multiple STC blocks, we devise a new ar-
chitecture — STCFormer for 3D human pose estimation.
Furthermore, we delve into the crucial design of positional
embedding in STCFormer in the context of pose estimation.
The observations that joints in the same body part are either
highly relevant (static part) or not relevant but containing
moving patterns (dynamic part) motivate us to design a new
Structure-enhanced Positional Embedding (SPE). SPE con-
sists of two embedding functions for the static and dynamic
part, respectively. A part-aware embedding is to describe
the static part by indicating which part each joint belongs
to, and a spatio-temporal convolution around neighboring
joints aims to capture dynamic structure in local window.
We summarize the main contributions of this work as
follows. First, STC is a new type of decomposed spatio-
temporal attention for 3D human pose estimation in an eco-
nomic and effective way. Second, STCFormer is a novel
transformer architecture by stacking multiple STC blocks
and integrating the structure-enhanced positional embed-
ding. Extensive experiments conducted on Human3.6M and
MPI-INF-3DHP datasets demonstrate that STCFormer with
much less parameters achieves superior performances than
the state-of-the-art techniques. |
Sun_Consistent_Direct_Time-of-Flight_Video_Depth_Super-Resolution_CVPR_2023 | Abstract
Direct time-of-flight (dToF) sensors are promising for
next-generation on-device 3D sensing. However, limited by
manufacturing capabilities in a compact module, the dToF
data has a low spatial resolution (e.g., ∼20×30 for iPhone
dToF), and it requires a super-resolution step before being
passed to downstream tasks. In this paper, we solve this
super-resolution problem by fusing the low-resolution dToF
data with the corresponding high-resolution RGB guidance.
Unlike the conventional RGB-guided depth enhancement
approaches, which perform the fusion in a per-frame man-
ner, we propose the first multi-frame fusion scheme to miti-
gate the spatial ambiguity resulting from the low-resolution
dToF imaging. In addition, dToF sensors provide unique
depth histogram information for each local patch, and we
incorporate this dToF-specific feature in our network design
to further alleviate spatial ambiguity. To evaluate our mod-
els on complex dynamic indoor environments and to pro-
vide a large-scale dToF sensor dataset, we introduce Dy-
DToF , the first synthetic RGB-dToF video dataset that fea-
tures dynamic objects and a realistic dToF simulator fol-lowing the physical imaging process. We believe the meth-
ods and dataset are beneficial to a broad community as
dToF depth sensing is becoming mainstream on mobile de-
vices. Our code and data are publicly available. https:
//github.com/facebookresearch/DVSR/
| 1. Introduction
On-device depth estimation is critical in navigation [41],
gaming [6], and augmented/virtual reality [3,8]. Previously,
various solutions based on stereo/structured-light sensors
and indirect time-of-flight sensors (iToF) [4,35,44,57] have
been proposed. Recently, direct time-of-flight (dToF) sen-
sor brought more interest in both academia [31, 36] and
industry [5], due to its high accuracy, compact form fac-
tor, and low power consumption [13, 37]. However, lim-
ited by the manufacturing capability, current dToF sensors
have very low spatial resolutions [13, 40]. Each dToF pixel
captures and pre-processes depth information from a local
patch in the scene (Sec. 3), leading to high spatial ambiguity
when estimating the high-resolution depth maps for down-
stream tasks [8]. Previous RGB-guided depth completion
and super-resolution algorithms either assume high resolu-
tion spatial information (e.g. high resolution sampling po-
sitions) [34, 55] or simplified image formation models (e.g.
bilinear downsampling) [16, 33]. Simple network tweaking
and retraining is insufficient in handling the more ill-posed
dToF depth super-resolution task. As shown in Fig. 1, 2nd
column, the predictions suffer from geometric distortions
and flying pixels. Another fundamental limitation of these
previous approaches is they focus on single-frame process-
ing, while in real-world applications, the depth estimation is
expected in video (data-stream) format with certain tempo-
ral consistency. Processing an RGB-depth video frame-by-
frame ignores temporal correlations and leads to significant
temporal jittering in the depth estimations [32, 43, 58].
In this paper, we propose to tackle the spatial ambiguity
in low-resolution dToF data from two aspects: with infor-
mation aggregation between multiple frames in an RGB-
dToF video and with dToF histogram information. We
first design a deep-learning-based RGB-guided dToF video
super-resolution (DVSR) framework (Sec. 4.1) that con-
sumes a sequence of high-resolution RGB images and low-
resolution dToF depth maps, and predicts a sequence of
high-resolution depth maps. Inspired by the recent advances
in RGB video processing [11,30], we loosen the multi-view
stereo constraints and utilize flexible, false-tolerant inter-
frame alignments to make DVSR agnostic to static or dy-
namic environments. Compared to per-frame processing
baselines, DVSR significantly improves both prediction ac-
curacy and the temporal coherence, as shown in Fig. 1, 3rd
column. Please refer to the supplementary video for tempo-
ral visualizations.
Moreover, dToF sensors provide histogram information
due to their unique image formation model [13]. Instead
of a single depth value from other types of 3D sensors,
the histogram contains a distribution of depth values within
each low-resolution pixel. From this observation, we fur-
ther propose a histogram processing pipeline based on
the physical image formation model and integrate it into
the DVSR framework to form a histogram video super-
resolution (HVSR) network (Sec. 4.2). In this way, the
spatial ambiguity in the depth estimation process is further
lifted. As shown in Fig. 1 4th column, compared to DVSR,
the HVSR estimation quality is further improved, especially
for fine structures such as the compartments of the cabinet,
and it eliminates the flying pixels near edges.
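As a rough illustration of this histogram formation, the sketch below bins the high-resolution depths covered by each low-resolution dToF pixel into a small per-pixel histogram; the patch size, bin count, and normalization are assumptions, not the simulator's actual parameters.
```python
# Hedged sketch: each low-res pixel aggregates the depths of the high-res
# patch it covers into a per-pixel histogram instead of a single depth value.
import torch

def dtof_histogram(depth_hr, patch=16, n_bins=32, d_max=10.0):
    """depth_hr: (H, W) depth in meters -> (H//patch, W//patch, n_bins)."""
    H, W = depth_hr.shape
    h, w = H // patch, W // patch
    patches = depth_hr[:h * patch, :w * patch].reshape(h, patch, w, patch)
    patches = patches.permute(0, 2, 1, 3).reshape(h, w, patch * patch)
    bins = (patches / d_max * n_bins).long().clamp(0, n_bins - 1)
    hist = torch.zeros(h, w, n_bins)
    hist.scatter_add_(2, bins, torch.ones_like(patches))   # count depths per bin
    return hist / hist.sum(dim=2, keepdim=True).clamp(min=1)  # normalized counts
```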
Another important aspect for deep-learning-based depth
estimation models is the training and evaluation datasets.
Previously, both real-world captured and high quality syn-
thetic dataset have been widely used [21, 38, 46, 51]. How-
ever, none of them contain RGB-D video sequences with
significant amount of dynamic objects. To this end, we in-
troduce DyDToF, a synthetic dataset with diverse indoorscenes and animations of dynamic animals (e.g., cats and
dogs) (Sec. 6). We synthesize sequences of RGB images,
depth maps, surface normal maps, material albedos, and
camera poses. To the best of our knowledge, this is the first
dataset that provides dynamic indoor RGB-Depth video.
We integrate physics-based dToF sensor simulations in the
DyDToF dataset and analyze (1) how the proposed video
processing framework generalizes to dynamic scenes and
(2) how the low-level data modalities facilitate network
training and evaluation.
In summary, our contributions are in three folds:
• We introduce RGB-guided dToF video depth super-
resolution to resolve inherent spatial ambiguity in such
mobile 3D sensor.
• We propose neural network based RGB-dToF video
super-resolution algorithms to efficiently employ the
rich information contained in multi-frame videos and
the unique dToF histograms.
• We introduce the first synthetic dataset with physics-
based dToF sensor simulations and diverse dynamic
objects. We conduct systematic evaluations on the pro-
posed algorithm and dataset to verify the significant
improvements on accuracy and temporal coherence.
|