title | abstract | introduction |
---|---|---|
Tertikas_Generating_Part-Aware_Editable_3D_Shapes_Without_3D_Supervision_CVPR_2023 | Abstract
Impressive progress in generative models and implicit
representations gave rise to methods that can generate 3D
shapes of high quality. However, being able to locally con-
trol and edit shapes is another essential property that can
unlock several content creation applications. Local control
can be achieved with part-aware models, but existing meth-
ods require 3D supervision and cannot produce textures. In
this work, we devise PartNeRF, a novel part-aware gener-
ative model for editable 3D shape synthesis that does not
require any explicit 3D supervision. Our model generates
objects as a set of locally defined NeRFs, augmented with
an affine transformation. This enables several editing operations such as applying transformations on parts, mixing parts from different objects, etc. To ensure distinct, manipulable parts, we enforce a hard assignment of rays to parts that ensures that the color of each ray is determined by a single NeRF only. As a result, altering one part does not affect the appearance of the others. Evaluations on various
ShapeNet categories demonstrate the ability of our model to
generate editable 3D objects of improved fidelity, compared
to previous part-based generative approaches that require
3D supervision or models relying on NeRFs.
| 1. Introduction
Generating realistic and editable 3D content is a long-
standing problem in computer vision and graphics that has
recently gained more attention due to the increased demand
for 3D objects in AR/VR, robotics and gaming applications.
However, manual creation of 3D models is a laborious en-
deavor that requires technical skills from highly experi-
enced artists and product designers. On the other hand, edit-
ing 3D shapes typically involves re-purposing existing 3D models by manually changing the faces and vertices of a mesh and modifying its respective UV-map [95]. To accommo-
date this process, several recent works introduced genera-
tive models that go beyond generation and allow editing the
generated instances [13,18,52,55,62,77,101,116,117,124].
Shape editing involves making local changes on the shape
and the appearance of different parts of an object. There-
fore, having a basic understanding of the decomposition of
the object into parts facilitates controlling what to edit.
While Generative Adversarial Networks (GANs) [30]
have emerged as a powerful tool for synthesizing photore-
alistic images [7, 15, 16, 47–49], scaling them to 3D data
is non-trivial as they ignore the physics of the image formation process. To address this, 3D-aware GANs incorporate
3D representations such as voxel grids [38, 72, 75] or com-
bine them with differentiable renderers [57, 126]. While
they faithfully recover the geometry and appearance, they
do not allow changing specific parts of the object.
Inspired by the rapid evolution of neural implicit ren-
dering techniques [68], recent works [8, 32, 77, 96, 114]
proposed to combine them with GANs in order to allow
for multi-view-consistent generations of high quality. De-
spite their impressive performance on novel view synthe-
sis, their editing capabilities are limited. To this end,
editing operations in the latent space have been explored
[21, 42, 62, 115, 124] but these approaches lack intuitive
control over the shape. By decomposing shapes into parts,
other works facilitate structure-aware shape manipulations
[40,70,88,111]. However, they require 3D supervision dur-
ing training and can only operate on textureless shapes.
To address these limitations, we devise PartNeRF, a
novel part-aware generative model, implemented as an auto-
decoder [5]. Our model enables part-level control, which fa-
cilitates various editing operations on the shape and appear-
ance of the generated instance. These operations include
rigid and non-rigid transformations on the object parts, part
mixing from different objects, removing/adding parts and
editing the appearance of specific parts of the object.
Our key idea is to represent objects using a set of locally
defined Neural Radiance Fields (NeRFs) that are arranged
such that the object can be plausibly rendered from a novel
view. To enable part-level control, we enforce a hard as-
signment between parts and rays that ensures that altering
one part does not affect the shape and appearance of the
others. Our model does not require 3D supervision; we only
assume supervision from images and object masks captured
from known cameras. We evaluate PartNeRF on various
ShapeNet categories and demonstrate that it generates tex-
tured shapes of higher fidelity than both part-based as well as NeRF-based generative models. Furthermore, we show-
case several editing operations, not previously possible.
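To make the hard ray-to-part assignment concrete, the sketch below assigns each ray to the part whose center lies closest to it and queries only that part's NeRF for the ray color. The nearest-center assignment rule, the per-part NeRF interface, and the toy constant-color "NeRFs" are illustrative assumptions, not the exact PartNeRF formulation.

```python
# Minimal sketch (not the authors' implementation): hard assignment of rays to
# locally defined part NeRFs, so each ray's color comes from a single part.
import numpy as np

def ray_point_distance(origins, dirs, centers):
    """Distance from each part center to each ray (origins/dirs: [R,3], centers: [P,3])."""
    d = dirs / np.linalg.norm(dirs, axis=-1, keepdims=True)          # [R,3]
    to_c = centers[None, :, :] - origins[:, None, :]                 # [R,P,3]
    t = np.einsum('rpc,rc->rp', to_c, d)                             # projection length
    closest = origins[:, None, :] + t[..., None] * d[:, None, :]     # [R,P,3]
    return np.linalg.norm(centers[None] - closest, axis=-1)          # [R,P]

def render_with_hard_assignment(origins, dirs, part_centers, part_nerfs):
    """part_nerfs[p](o, d) -> RGB for the rays assigned to part p (placeholder fns)."""
    dist = ray_point_distance(origins, dirs, part_centers)           # [R,P]
    assign = dist.argmin(axis=1)                                     # one part per ray
    colors = np.zeros((origins.shape[0], 3))
    for p in range(part_centers.shape[0]):
        mask = assign == p
        if mask.any():
            colors[mask] = part_nerfs[p](origins[mask], dirs[mask])  # only this NeRF
    return colors

# Toy usage: two "NeRFs" that return constant colors.
rays_o = np.zeros((4, 3)); rays_d = np.random.randn(4, 3)
centers = np.array([[1.0, 0, 0], [-1.0, 0, 0]])
nerfs = [lambda o, d: np.tile([1.0, 0, 0], (len(o), 1)),
         lambda o, d: np.tile([0, 0, 1.0], (len(o), 1))]
print(render_with_hard_assignment(rays_o, rays_d, centers, nerfs))
```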
In summary, we make the following contributions:
We propose the first part-aware generative model that
parametrizes parts as NeRFs and can generate editable 3D
shapes. Unlike prior part-based approaches, our model
does not require explicit 3D supervision and can gener-
ate textured shapes. Compared to NeRF-based generative
models, our work is the first that reasons about parts and
hence enables operations both on the shape and the tex-
ture of the generated object. Code and data are available at https://ktertikas.github.io/part_nerf.
|
Truong_FREDOM_Fairness_Domain_Adaptation_Approach_to_Semantic_Scene_Understanding_CVPR_2023 | Abstract
Although Domain Adaptation in Semantic Scene Seg-
mentation has shown impressive improvement in recent
years, the fairness concerns in the domain adaptation have
yet to be well defined and addressed. In addition, fairness is
one of the most critical aspects when deploying the segmen-
tation models into human-related real-world applications,
e.g., autonomous driving, as any unfair predictions could
influence human safety. In this paper, we propose a novel
Fairness Domain Adaptation (FREDOM) approach to se-
mantic scene segmentation. In particular, based on the proposed fairness objective, a new adaptation framework is introduced that builds on the fair treatment of class distributions. Moreover, to generally model the context of struc-
tural dependency, a new conditional structural constraint is
introduced to impose the consistency of predicted segmen-
tation. Thanks to the proposed Conditional Structure Net-
work, the self-attention mechanism has sufficiently modeled
the structural information of segmentation. Through ablation studies, the proposed method is shown to improve the performance of segmentation models and to promote fairness in their predictions. The experimental results on the two standard benchmarks, i.e., SYNTHIA → Cityscapes and GTA5 → Cityscapes, show that our method achieves State-of-the-Art (SOTA) performance.
| 1. Introduction
Semantic segmentation has achieved remarkable results
in a wide range of practical problems, including scene un-
derstanding, autonomous driving, and medical imaging, by
using deep learning models, e.g., Convolutional Neural Net-
works (CNN) [3, 4, 24], Transformers [45]. Despite the
phenomenal achievement, these data-driven approaches still treat the predictions of different classes unfairly. In particular, segmentation models typically treat classes unequally according to the class distributions of the dataset. This is known as the fairness problem of semantic segmentation. (The implementation of FREDOM is available at https://github.com/uark-cviu/FREDOM.)
Figure 1. The class distributions on Cityscapes defined for the fairness problem and the long-tail problem. In the long-tail problem, several head classes frequently exist in the dataset, e.g., Pole, Traffic Light, or Sign. Still, these classes belong to a minority group in the fairness problem, as their appearance in images does not occupy many pixels. Our FREDOM promotes the fairness of models, illustrated by an increase of mIoU on the minority group.
The unfair predictions of segmentation mod-
els can lead to severe problems, e.g., in autonomous driving,
unfair predictions may result in wrong decisions in motion
planning control and therefore affect human safety. More-
over, the fairness issue of segmentation models becomes even more evident, or is even exaggerated, when the trained models are de-
ployed into new domains. Many prior works alleviate the
performance drop on new domains by using unsupervised
domain adaptation, but these approaches do not guarantee the fairness property.
Figure 2. Illustration of the presence of classes in the Major (green boxes) and Minor (red boxes) groups. Classes in the minority group typically occupy fewer pixels than those in the majority group (best viewed in color and at 2× zoom).
The fairness issue in semantic segmentation has received little attention, under either the supervised or the domain adaptation setting. Moreover, fairness in semantic segmentation has not been clearly defined and is therefore often conflated with the long-tail issue. In particular, the long-tail problem in
segmentation is typically caused by the number of existing
instances of each class in the dataset [21, 44]. Meanwhile,
the fairness problem in segmentation is defined with respect to the number of pixels of each class in the dataset. Although
there could be a correlation between fairness and long-tail
problems, these two issues are distinct. For example, sev-
eral objects constantly exist in the dataset, but their presence
often occupies only tiny regions of the given image (containing a small number of pixels). For example, the Pole class, a head class in Cityscapes, accounts for over 20% of instances while occupying less than 0.01% of the pixels. Hence, under the fairness definition, it should belong to the minority group of classes, as its presence does not occupy many pixels in the image. Another example is Person, which accounts for over 5% of instances while occupying less than 0.01% of the pixels. Traffic
Lights or Signs also suffer a similar problem. Fig. 2 illus-
trates the appearance of classes in the majority and minority
groups. Therefore, although instances of these classes con-
stantly exist in the dataset, these are still being mistreated by
the segmentation model. Fig. 1 illustrates the class distribu-
tions defined based on long-tail and fairness, respectively.
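The pixel-based grouping used by the fairness definition can be sketched as follows: classes are ranked by the fraction of pixels they occupy, and those below a threshold form the minority group. The label maps and the 5% threshold below are synthetic placeholders, not Cityscapes statistics.

```python
# Sketch of the distinction drawn above: a class can be a "head" class by
# instance count (long-tail view) yet land in the minority group by pixel
# count (fairness view). Inputs are synthetic placeholders, not real data.
import numpy as np

def pixel_distribution(label_maps, num_classes):
    """Fraction of pixels per class over a list of HxW integer label maps."""
    counts = np.zeros(num_classes)
    for lbl in label_maps:
        counts += np.bincount(lbl.ravel(), minlength=num_classes)
    return counts / counts.sum()

def split_major_minor(fractions, threshold=0.05):
    major = np.where(fractions >= threshold)[0]
    minor = np.where(fractions < threshold)[0]
    return major, minor

# Toy example with 3 classes: class 2 appears in many images (frequent by
# instances) but covers very few pixels, so it falls in the minority group.
rng = np.random.default_rng(0)
maps = [np.where(rng.random((64, 64)) < 0.01, 2, rng.integers(0, 2, (64, 64)))
        for _ in range(10)]
frac = pixel_distribution(maps, num_classes=3)
print("pixel fractions:", frac)
print("major / minor groups by pixels:", split_major_minor(frac))
```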
Several works reduce the class imbalance effects us-
ing weighted (balanced) cross entropy [13, 21, 44], focal
loss [1], data augmentation or rare-class sampling tech-
niques [1, 19]. Still, these do not address the fairness problem directly. Indeed, many prior domain adaptation meth-
ods [6, 17, 28, 34, 36–39] have been used to improve the
overall performance. However, these methods often ignore
unfair effects produced by the model caused by the imbal-
anced class distribution. Besides, in some adaptation ap-
proaches using entropy minimization [29, 42], the model’s
bias caused by the class imbalance between majority and
minority groups is even exaggerated [7, 35]. Meanwhile,
other approaches using re-weighted or focal loss [1] often
assume pixel independence and then penalize the loss con-
tribution of each pixel individually and ignore the structural information of images. Then, pixel independence is relaxed
by adopting the Markovian assumption [3,48] to model seg-
mentation structures based on neighboring pixels. In the scope of our work, we are interested in addressing the fairness problem between classes in semantic segmentation under the unsupervised domain adaptation setting. It should be noted that this problem is practical: in real-world applications (e.g., autonomous driving), deep learning mod-
els are typically deployed into new domains compared to
the training dataset. Then, unsupervised domain adaptation
plays a role in bridging the gap between the two domains.
Contributions of This Work: This work presents a novel
Unsupervised Fairness Domain Adaptation (FREDOM)
approach to semantic segmentation. To the best of our
knowledge, this is one of the first works to address the fair-
ness problem in semantic segmentation under the domain
adaptation setting. Our contributions can be summarized
as follows. First, the new fairness objective is formulated
for semantic scene segmentation. Then, based on the fair-
ness metric, we propose a novel fairness domain adapta-
tion approach based on the fair treatment of class distribu-
tions. Second, the novel Conditional Structural Constraint
is proposed to model the structural consistency of segmen-
tation maps. Thanks to our introduced Conditional Struc-
ture Network, the spatial relationship and structure infor-
mation are well modeled by the self-attention mechanism.
Significantly, our structural constraint relaxes the assump-
tion of pixel independence held by prior approaches and
generalizes the Markovian assumption by considering the
structural correlations between all pixels. Finally, our ab-
lation studies have shown the effectiveness of different as-
pects in our approach to the fairness improvement of seg-
mentation models. Through experiments, our FREDOM
has promoted the fairness property of segmentation models
and achieved state-of-the-art (SOTA) performance on two
standard benchmarks of unsupervised domain adaptation,
i.e., SYNTHIA → Cityscapes and GTA5 → Cityscapes.
|
Tan_Sample-Level_Multi-View_Graph_Clustering_CVPR_2023 | Abstract
Multi-view clustering has hitherto been studied due to its effectiveness in dealing with heterogeneous data. De-
spite the empirical success made by recent works, there
still exists several severe challenges. Particularly, previous
multi-view clustering algorithms seldom consider the topo-
logical structure in data, which is essential for clustering
data on manifold. Moreover, existing methods cannot fully
explore the consistency of local structures between different
views as they uncover the clustering structure in an intra-view way instead of an inter-view manner. In this paper, we
propose to exploit the implied data manifold by learning
the topological structure of data. Besides, considering that
the consistency of multiple views is manifested in the gen-
erally similar local structure while the inconsistent struc-
tures are the minority, we further explore the intersections
of multiple views at the sample level such that the cross-view
consistency can be better maintained. We model the above
concerns in a unified framework and design an efficient al-
gorithm to solve the corresponding optimization problem.
Experimental results on various multi-view datasets certify the effectiveness of the proposed method and verify its
superiority over other SOTA approaches.
| 1. Introduction
In many real scenarios, data are usually collected from
diverse sources in various domains or described by multi-
ple feature sets [1, 2]. A case in point is the image dataset,
which can be represented by different visual descriptors,
such as LBP, GIST, CENTRIST, HOG, SIFT and Color Mo-
ment [3]. Moreover, a document can be written in differ-
ent languages, and a website can be described by its link-
age, page content, etc. These data are generally known as
multi-view data [1, 4]. Each view contains partial and com-
plementary information, any of which suffices for learning,
and they all together agree to a consensus latent represen-
tation [5–7]. Multi-view clustering aims to accurately par-
tition data into distinct clusters according to their compati-
ble and complementary information embedded in heteroge-
neous features [1, 8].
In general, multi-view clustering methods can be di-
vided into four categories according to the mechanisms
and principles on which these methods are based. [1] initiated the trend of co-training algorithms by carrying out a co-training strategy that makes the clustering results on all views agree with each other while satisfying the broadest agreement across all views. Owing to the ef-
ficiency of exploiting similarities among multiple views,
multi-kernel learning is widely utilized to boost the perfor-
mance of multi-view clustering methods [9–11]. Multi-task
multi-view clustering [12,13] inherits the property of multi-
task clustering. Each view of the data is treated with at least
one related task. Moreover, inter-task knowledge is trans-
ferred from one to another so that the relationship between
multi-view and multi-task is fully exploited to improve the
clustering outcomes. Another kind of graph-based multi-
view clustering methods usually attempt to explore an opti-
mal consensus graph across all views, and utilize graph-cut
algorithm on the optimal graph to obtain final clustering re-
sult [3, 5, 14].
A great number of multi-view clustering methods have
been proposed and have demonstrated remarkable empirical successes up to now [15–17]. However, there are still severe drawbacks that urgently need to be overcome. For one thing,
existing multi-view clustering algorithms seldom consider
the topological structure in data, which is essential for clus-
tering data on manifold. Considering that the data sampled
from real world typically lie in the nonlinear manifold, data
points geometrically far from one another may keep high consistency when they are linked by a series of consecutive
neighbors. For another, they suffer from a coarse-grained
problem that ignores the view correlation between samples.
Note that redundancy or partial structure mistakes in certain
views may lead to a sub-optimal cluster structure. Besides,
traditional view-wise fusion strategy would result in a su-
perimposition of redundancies, and hence acquire a more
imprecise common cluster structure.
Regarding the above issues, we propose to exploit the
implied data manifold by learning the topological structure
of data. Besides, considering that the consistency of multi-
ple views is manifested in the generally similar local struc-
ture while the inconsistent structures are the minority, we
further explore the intersections of multiple views at the
sample level such that the cross-view consistency can be
better maintained. By leveraging the subtasks of topologi-
cal relevance learning and the sample-level graph fusion in
our collaborative model, each subtask is alternately boosted
towards an optimal solution. Experimental results on var-
ious multi-view datasets certify the effectiveness of the
proposed method.
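As a rough illustration of what sample-level (rather than view-wise) fusion means, the sketch below fuses per-view k-NN graphs row by row, weighting each view separately for every sample according to how well its neighborhood agrees with the other views. This is only one possible reading of the idea, on placeholder data, and not the paper's actual optimization model.

```python
# Rough illustration of sample-level (row-wise) fusion of per-view kNN graphs,
# as opposed to fusing whole views with a single weight. This is only an
# interpretation of the idea above, not the paper's optimization model.
import numpy as np

def knn_graph(X, k=5):
    """Binary kNN adjacency (n x n) from a feature matrix X (n x d)."""
    d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    A = np.zeros_like(d)
    nn = np.argsort(d, axis=1)[:, :k]
    rows = np.repeat(np.arange(X.shape[0]), k)
    A[rows, nn.ravel()] = 1.0
    return A

def sample_level_fusion(graphs):
    """Fuse view graphs per sample: weight each view's row by its agreement
    with the element-wise average of the other views' rows."""
    graphs = np.stack(graphs)                      # [V, n, n]
    fused = np.zeros(graphs.shape[1:])
    for i in range(graphs.shape[1]):
        rows = graphs[:, i, :]                     # [V, n]
        mean_row = rows.mean(axis=0)
        w = (rows * mean_row).sum(axis=1) + 1e-8   # per-view agreement for sample i
        w = w / w.sum()
        fused[i] = (w[:, None] * rows).sum(axis=0)
    return fused

views = [np.random.randn(30, 8) + v for v in range(3)]   # toy multi-view data
fused = sample_level_fusion([knn_graph(X, k=5) for X in views])
print(fused.shape)
```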
|
Sun_TRACE_5D_Temporal_Regression_of_Avatars_With_Dynamic_Cameras_in_CVPR_2023 | Abstract
Although the estimation of 3D human pose and shape
(HPS) is rapidly progressing, current methods still can-
not reliably estimate moving humans in global coordinates,
which is critical for many applications. This is particularly
challenging when the camera is also moving, entangling
human and camera motion. To address these issues, we
adopt a novel 5D representation (space, time, and identity)
that enables end-to-end reasoning about people in scenes.
Our method, called TRACE, introduces several novel ar-
chitectural components. Most importantly, it uses two new
“maps” to reason about the 3D trajectory of people over
time in camera, and world, coordinates. An additional
memory unit enables persistent tracking of people even dur-
ing long occlusions. TRACE is the first one-stage method to
jointly recover and track 3D humans in global coordinates
from dynamic cameras. By training it end-to-end, and us-
ing full image information, TRACE achieves state-of-the-art
performance on tracking and HPS benchmarks. The code (https://www.yusun.work/TRACE/TRACE.html) and dataset (https://github.com/Arthur151/DynaCam) are released for research purposes.
| 1. Introduction
The estimation of 3D human pose and shape (HPS) has
many applications and there has been significant recent
progress [6, 7, 21–24, 37, 38, 46, 49, 56, 57, 60]. Most meth-
ods, however, reason only about a single frame at a time and
estimate humans in camera coordinates . Moreover, such
methods do not track people and are unable to recover their
global trajectories. The problem is even harder in typical
hand-held videos, which are filmed with a dynamic, mov-
ing, camera. For many applications of HPS, single-frame
estimates in camera coordinates are not sufficient. To cap-
ture human movement and then transfer it to a new 3D
scene, we must have the movement in a coherent global co-
ordinate system. This is a requirement for computer graph-
ics, sports, video games, and extended reality (XR).
Our key insight is that most methods estimate humans in
3D, whereas the true problem is 5D. That is, a method needs
to reason about 3D space, time, and subject identity. With a
5D representation, the problem becomes tractable, enabling
a holistic solution that can exploit the full video to infer
multiple people in a coherent global coordinate frame. As
illustrated in Fig. 1, we develop a unified method to jointly
regress the 3D pose, shape, identity, and global trajectory
of the subjects in global coordinates from monocular videos
captured by dynamic cameras (DC-videos).
To achieve this, we deal with two main challenges. First,
DC-videos contain both human motion and camera motion
and these must be disentangled to recover the human trajec-
tory in global coordinates. One idea would be to recover the
camera motion relative to the rigid scene using structure-
from-motion (SfM) methods (e.g. [31]). In scenes contain-
ing many people and human motion, however, such meth-
ods can be unreliable. An alternative approach is taken by
GLAMR [58], which infers global human trajectories from
local 3D human poses, without taking into account the full
scene. By ignoring evidence from the full image, GLAMR
fails to capture the correct global motion in common scenar-
ios, such as biking, skating, boating, running on a treadmill,
etc. Moreover, GLAMR is a multi-stage method, with each
stage dependent on accurate estimates from the preceding
one. Such approaches are more brittle than our holistic,
end-to-end, method.
The other challenge, as shown in the upper right corner
of Fig. 1, is that severe occlusions are common in videos
with multiple people. Currently, the most popular tracking
strategy is to infer the association between 2D detections
using a temporal prior (e.g. Kalman filter) [63]. However,
in DC-videos, human motions are often irregular and can
easily violate hand-crafted priors. PHALP [40] is one of
the few methods to address this for 3D HPS. It uses a clas-
sical, multi-stage, detection-and-tracking formulation with
heuristic temporal priors. It does not holistically reason
about the sequence and is not trained end-to-end.
To address these issues, we reason about people using
a 5D representation and capture information from the full
image and the motion of the scene. This holistic reason-
ing enables the reliable recovery of global human trajecto-
ries and subject tracking using a single-shot method. This
is more reliable than multi-stage methods because the net-
work can exploit more information to solve the task and is
trained end-to-end. No hand-crafted priors are needed and
the network is able to share information among modules.
Specifically, we develop TRACE, a unified one-stage
method for Temporal Regression of Avatars with dynamic
Cameras in 3D Environments. The architecture is inspired
by BEV [45], which directly estimates multiple people in
depth from a single image using multiple 2D maps. BEV
uses a 2D map representing an imaginary, “top down”, view
of the scene. This is combined with an image-centric 2D
map to reason about people in 3D. Our key insight is that
the idea of maps can be extended to represent how people
move in 3D. With this idea, TRACE introduces three new modules to holistically model 5D human states, perform multi-person temporal association, and infer human trajectories in global coordinates; see Fig. 2.
First, to construct a holistic 5D representation of the
video, we extract temporal image features by fusing single-
frame feature maps from the image backbone with a tem-
poral feature propagation module. We also compute the
optical flow between adjacent frames with a motion back-
bone. The optical flow provides short-term motion features
that carry information about the motion of the scene and
the people. Second, to explicitly track human motions, we
introduce a novel 3D motion offset map to establish the as-
sociation of the same person across adjacent frames. This
map contains a 3D offset vector at each position, which rep-
resents the difference between the 3D positions of the same
subject from the previous frame to the current frame in cam-
era coordinates. We also introduce a memory unit to keep
track of subjects under long-term occlusion. Note that the
3D trajectories are built in camera space, and TRACE uses
a novel world motion map that transfers the trajectories to
global coordinates. At each position, this map contains a
6D vector to represent the difference between the 3D posi-
tions of the corresponding subject from the previous frame
to the current frame and its 3D orientation in world coordi-
nates. Taken together, this novel network architecture goes
beyond prior work by taking information from the full video
frames to address detection, pose estimation, tracking, and
occlusion in a holistic network that is trained end-to-end.
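The role of the 3D motion offset map in tracking can be summarized with a small sketch: the offset predicted at each current detection points back to the subject's 3D position in the previous frame, and identities are linked by nearest 3D distance. The greedy matching loop and the distance threshold are simplifying assumptions, not the exact TRACE association procedure.

```python
# Sketch of identity association via 3D motion offsets: the offset predicted at a
# current detection points to the subject's previous 3D position, and detections
# are linked to the closest previous track. Greedy matching and the distance
# threshold are simplifications, not the exact TRACE procedure.
import numpy as np

def associate(prev_positions, curr_positions, motion_offsets, max_dist=0.5):
    """prev_positions: [M,3] tracked 3D positions at t-1.
    curr_positions:  [N,3] detected 3D positions at t.
    motion_offsets:  [N,3] predicted displacement from t-1 to t for each detection.
    Returns a list of (track_idx, detection_idx) matches."""
    back_projected = curr_positions - motion_offsets        # estimated position at t-1
    cost = np.linalg.norm(back_projected[:, None] - prev_positions[None], axis=-1)
    matches, used_tracks = [], set()
    for det in np.argsort(cost.min(axis=1)):                # most confident first
        track = int(cost[det].argmin())
        if track not in used_tracks and cost[det, track] < max_dist:
            matches.append((track, int(det)))
            used_tracks.add(track)
    return matches

prev = np.array([[0.0, 0, 3.0], [1.0, 0, 4.0]])             # two tracked people
curr = np.array([[1.1, 0, 4.2], [0.1, 0, 3.1]])             # detections at frame t
offs = np.array([[0.1, 0, 0.2], [0.1, 0, 0.1]])             # predicted 3D offsets
print(associate(prev, curr, offs))                          # expected: [(1, 0), (0, 1)]
```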
To enable training and evaluation of global human tra-
jectory estimation from in-the-wild DC-videos, we build a
new dataset, DynaCam. Since collecting global human tra-
jectories and camera poses with in-the-wild DC-videos is
difficult, we simulate a moving camera using publicly avail-
able in-the-wild panoramic videos and regular videos cap-
tured by static cameras. In this way, we create more than
500 in-the-wild DC-videos with precise camera pose anno-
tations. Then we generate pseudo-ground-truth 3D human
annotations via fitting [20] SMPL [32] to detected 2D pose
sequences [17,54,63]. With 2D/3D human pose and camera
pose annotations, we can obtain the global human trajecto-
ries using the PnP algorithm [16]. This dataset is sufficient
to train TRACE to deal with dynamic cameras.
We evaluate TRACE on two multi-person in-the-wild
benchmarks (3DPW [50] and MuPoTS-3D [35]) and our
DynaCam dataset. On 3DPW, TRACE achieves the state-
of-the-art (SOTA) PA-MPJPE of 37.8 mm, lower than the current best (42.7 [26]). On MuPoTS-3D, TRACE outper-
forms previous 3D-representation-based methods [39, 40]
and tracking-by-detection methods [63] on tracking people
under long-term occlusion. On DynaCam, TRACE outper-
forms GLAMR [58] in estimating the 3D human trajectory
in global coordinates from DC-videos.
In summary, our main contributions are: (1) We intro-
duce a 5D representation and use it to learn holistic tempo-
ral cues related to both 3D human motions and the scene.
(2) We introduce two novel motion offset representations
to explicitly model temporal multi-subject association and
global human trajectories from temporal clues in an end-to-
end manner. (3) We estimate long-term 3D human motions
over time in global coordinates, achieving SOTA results. (4)
We collect the DynaCam dataset of DC-videos with pseudo
ground truth, which facilitates the training and evaluation of
global human trajectory estimation. The code and dataset
are publicly available for research purposes.
|
Tan_Distilling_Neural_Fields_for_Real-Time_Articulated_Shape_Reconstruction_CVPR_2023 | Abstract
We present a method for reconstructing articulated 3D
models from videos in real-time, without test-time optimiza-
tion or manual 3D supervision at training time. Prior
work often relies on pre-built deformable models (e.g.
SMAL/SMPL), or slow per-scene optimization through dif-
ferentiable rendering (e.g. dynamic NeRFs). Such methods
fail to support arbitrary object categories, or are unsuit-
able for real-time applications. To address the challenge
of collecting large-scale 3D training data for arbitrary de-
formable object categories, our key insight is to use off-
the-shelf video-based dynamic NeRFs as 3D supervision to
train a fast feed-forward network, turning 3D shape and
motion prediction into a supervised distillation task. Our
temporal-aware network uses articulated bones and blend
skinning to represent arbitrary deformations, and is self-
supervised on video datasets without requiring 3D shapes
or viewpoints as input. Through distillation, our network
learns to 3D-reconstruct unseen articulated objects at in-
teractive frame rates. Our method yields higher-fidelity 3D
reconstructions than prior real-time methods for animals,
with the ability to render realistic images at novel view-
points and poses.
| 1. Introduction
We are interested in building high-quality animatable
models of articulated 3D objects from videos in real time.
One promising application is virtual and augmented real-
ity, where the goal is to create high-fidelity 3D experiences
from images and videos captured live by users. For rigid
scenes, structure from motion (SfM) and neural rendering
can be used to build accurate 3D cities and landmarks from
Internet image collections [1, 20, 33]. For articulated ob-
jects such as friends and pets, many works parameterize the
range of motions using category-specific templates such as
SMPL [18] for humans and SMAL [4] for quadruped ani-
mals. Although these methods can be trained on large-scale
video datasets, they rely on parametric body template mod-
els built from extensive real-world 3D scans: these body
models are not easy to generate for diverse categories in the
wild such as clothed humans or pets with distinct morpholo-
gies, which are often the focus of user content.
Inspired by the breakthrough success of neural radiance
fields [21], many works reconstruct arbitrary articulated ob-
jects in an analysis-by-synthesis framework [16, 27, 28, 30,
36, 44] by defining time-dependent 3D warping fields and
establishing long-range correspondences on top of canon-
ical shape and appearance models. These methods out-
put high-quality reconstructions of arbitrary objects without
3D data or pre-defined templates, but the output represen-
tations are scene-specific and often require hours to com-
pute from scratch on unseen videos - an unacceptable cost
for real-time VR/AR tasks. We are thus interested in dy-
namic 3D reconstruction algorithms that achieve the best
of both worlds: the speed of template-based models and
the quality and generalization ability of dynamic NeRFs.
To achieve this, our key insight is remarkably simple: we
train category-specific feed-forward 3D predictors at scale
by self-supervising them with dynamic NeRF “teachers” fit-
ted to offline video data.
By leveraging scene-fitted dynamic NeRFs for 3D su-
pervision at scale, our method learns a feed-forward pre-
dictor for appearance, 3D shape, and articulations of non-
rigid objects from videos. Our learned 3D models use lin-
ear blend skinning to express articulations, allowing it to
be animated by manipulating bone transformations. We ad-
dress three key challenges in our work: (1) how to super-
vise feed-forward models with internal representations of
dynamic NeRFs, (2) how to produce temporally consistent
predictions of pose, articulation, and appearance, and (3)
how to build efficient systems for real-time reconstruction.
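Since the learned models express articulation through linear blend skinning, a compact reference version of that standard operation is sketched below; the bone transforms and skinning weights are placeholders rather than quantities produced by the method.

```python
# Reference sketch of linear blend skinning (LBS), the standard articulation model
# referred to above: each vertex is deformed by a weighted sum of bone transforms.
# Bone transforms and skinning weights here are placeholders, not learned values.
import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """vertices: [V,3], weights: [V,B] (rows sum to 1), bone_transforms: [B,4,4]."""
    V = vertices.shape[0]
    homo = np.concatenate([vertices, np.ones((V, 1))], axis=1)       # [V,4]
    per_bone = np.einsum('bij,vj->vbi', bone_transforms, homo)       # [V,B,4]
    blended = np.einsum('vb,vbi->vi', weights, per_bone)             # [V,4]
    return blended[:, :3]

# Toy usage: two bones, the second rotated 90 degrees about z and shifted.
verts = np.array([[0.0, 0, 0], [1.0, 0, 0], [2.0, 0, 0]])
w = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
T0 = np.eye(4)
T1 = np.eye(4)
T1[:3, :3] = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
T1[:3, 3] = [0.0, 0.5, 0.0]
print(linear_blend_skinning(verts, w, np.stack([T0, T1])))
```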
|
Srivastava_How_You_Feelin_Learning_Emotions_and_Mental_States_in_Movie_CVPR_2023 | Abstract
Movie story analysis requires understanding characters’
emotions and mental states. Towards this goal, we for-
mulate emotion understanding as predicting a diverse and
multi-label set of emotions at the level of a movie scene
and for each character. We propose EmoTx, a multimodal
Transformer-based architecture that ingests videos, multi-
ple characters, and dialog utterances to make joint pre-
dictions. By leveraging annotations from the MovieGraphs
dataset [72], we aim to predict classic emotions ( e.g. happy,
angry) and other mental states ( e.g. honest, helpful). We
conduct experiments on the most frequently occurring 10
and 25 labels, and a mapping that clusters 181 labels to
26. Ablation studies and comparison against adapted state-
of-the-art emotion recognition approaches shows the effec-
tiveness of EmoTx. Analyzing EmoTx’s self-attention scores
reveals that expressive emotions often look at character to-
kens while other mental states rely on video and dialog cues.
| 1. Introduction
In the movie The Pursuit of Happyness , we see the pro-
tagonist experience a roller-coaster of emotions from the
lows of breakup and homelessness to the highs of getting
selected for a coveted job. Such heightened emotions are of-
ten useful to draw the audience in through relatable events
as one empathizes with the character(s). For machines to
understand such a movie (broadly, story), we argue that it
is paramount to track how characters’ emotions and men-
tal states evolve over time. Towards this goal, we lever-
age annotations from MovieGraphs [72] and train models
to watch the video, read the dialog, and predict the emo-
tions and mental states of characters in each movie scene.
Emotions are a deeply-studied topic. From ancient Rome
and Cicero’s 4-way classification [60], to modern brain re-
search [33], emotions have fascinated humanity. From psychologists' use of Plutchik's wheel [53] to Ekman's proposal of universality in facial expressions [18], structure has been provided to this field through various theories. Affective emotions are also grouped into mental (affective, behavioral, and cognitive) or bodily states [13].
Figure 1. Multimodal models and multi-label emotions are necessary for understanding the story. A: What character emotions can we sense in this scene? Is a single label enough? B: Without the dialog, can we try to guess the emotions of the Sergeant and the Soldier? C: Is it possible to infer the emotions from the characters' facial expressions (without subtitles and visual background) only? Check the footnote below for the ground-truth emotion labels for these scenes and the supplement for an explanation of the story.
A recent work on recognizing emotions with visual con-
text, Emotic [31] identifies 26 label clusters and proposes
a multi-label setup wherein an image may exhibit multiple emotions (e.g., peace, engagement). An alternative to
the categorical space, valence, arousal, and dominance are
also used as three continuous dimensions [31]. Predicting a
rich set of emotions requires analyzing multiple contextual
modalities [31, 34, 44]. Popular directions in multimodal
emotion recognition are Emotion Recognition in Conver-
sations (ERC) that classifies the emotion for every dialog
utterance [42,54,83]; or predicting a single valence-activity
score for short ∼10s movie clips [4, 45].
We operate at the level of a movie scene: a set of shots
telling a sub-story, typically at one location, among a de-
fined cast, and in a short time span of 30 to 60 s. Thus, scenes are considerably longer than single dialogs [54] or movie clips in [4]. (Ground-truth emotions and mental states portrayed in the movie scenes in Fig. 1: A: excited, curious, confused, annoyed, alarmed; B: shocked, confident; C: happy, excited, amused, shocked, confident, nervous.) We predict emotions and mental states
for all characters in the scene and also by accumulating la-
bels at the scene level. Estimation on a larger time window
naturally lends itself to multi-label classification as charac-
ters may portray multiple emotions simultaneously (e.g., curious and confused) or have transitions due to interactions with other characters (e.g., worried to calm).
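A minimal sketch of such a multi-label head is given below: one sigmoid per label trained with binary cross-entropy, so that co-occurring labels can be active simultaneously. The feature dimension, label count, and 0.5 threshold are illustrative assumptions, not EmoTx's actual configuration.

```python
# Minimal multi-label classification head: one sigmoid per emotion label trained
# with binary cross-entropy, so co-occurring labels (e.g., curious AND confused)
# can both be active. Dimensions and threshold are illustrative, not EmoTx's.
import torch
import torch.nn as nn

num_labels, feat_dim = 25, 512
head = nn.Linear(feat_dim, num_labels)
criterion = nn.BCEWithLogitsLoss()

features = torch.randn(4, feat_dim)            # e.g., pooled scene/character features
targets = torch.zeros(4, num_labels)
targets[0, [2, 7]] = 1.0                       # a scene can carry several labels at once

logits = head(features)
loss = criterion(logits, targets)
loss.backward()

predicted = (torch.sigmoid(logits) > 0.5)      # independent per-label decisions
print(loss.item(), predicted.shape)
```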
We perform experiments with multiple label sets: Top-
10 or 25 most frequently occurring emotion labels in
MovieGraphs [72] or a mapping to the 26 labels in the
Emotic space, created by [45]. While emotions can broadly
be considered as part of mental states, for this work, we con-
sider that expressed emotions are apparent by looking at the
character, e.g., surprise, sad, angry; and mental states are latent and only evident through interactions or dialog, e.g., polite, determined, confident, helpful. We posit that classifi-
cation in a rich label space of emotions requires looking
at multimodal context as evident from masking context in
Fig. 1. To this end, we propose EmoTx that jointly models
video frames, dialog utterances, and character appearance.
We summarize our contributions as follows: (i) Building
on rich annotations from MovieGraphs [72], we formulate
scene and per-character emotion and mental state classifi-
cation as a multi-label problem. (ii) We propose a multi-
modal Transformer-based architecture EmoTx that predicts
emotions by ingesting all information relevant to the movie
scene. EmoTx is also able to capture label co-occurrence
and jointly predicts all labels. (iii) We adapt several pre-
vious works on emotion recognition for this task and show
that our approach outperforms them all. (iv) Through analy-
sis of the self-attention mechanism, we show that the model
learns to look at relevant modalities at the right time. Self-
attention scores also shed light on our model’s treatment of
expressive emotions vs. mental states.
|
Tripathi_Edges_to_Shapes_to_Concepts_Adversarial_Augmentation_for_Robust_Vision_CVPR_2023 | Abstract
Recent work has shown that deep vision models tend
to be overly dependent on low-level or “texture” features,
leading to poor generalization. Various data augmentation
strategies have been proposed to overcome this so-called
texture bias in DNNs. We propose a simple, lightweight
adversarial augmentation technique that explicitly incen-
tivizes the network to learn holistic shapes for accurate
prediction in an object classification setting. Our augmen-
tations superpose edgemaps from one image onto another
image with shuffled patches, using a randomly determined
mixing proportion, with the image label of the edgemap im-
age. To classify these augmented images, the model needs to
not only detect and focus on edges but distinguish between
relevant and spurious edges. We show that our augmenta-
tions significantly improve classification accuracy and ro-
bustness measures on a range of datasets and neural ar-
chitectures. As an example, for ViT-S, we obtain absolute classification accuracy gains of up to 6%. We also
obtain gains of up to 28% and 8.5% on natural adversarial
and out-of-distribution datasets like ImageNet-A (for ViT-
B) and ImageNet-R (for ViT-S), respectively. Analysis us-
ing a range of probe datasets shows substantially increased
shape sensitivity in our trained models, explaining the ob-
served improvement in robustness and classification accu-
racy.
| 1. Introduction
A growing body of research catalogues and analyzes ap-
parent failure modes of deep vision models. For instance,
work on texture bias [1,7,12] suggests that image classifiers
are overdependent on textural cues and fail against simple
(adversarial) texture substitutions. Relatedly, the idea of
simplicity bias [25] captures the tendency of deep models
to use weakly predictive “simple” features such as color or
texture, even in the presence of strongly predictive com-
plex features. In psychology & neuroscience, too, evidence
suggests that deep networks focus more on “local” features rather than global features and differ from human behavior in related tasks [19].
Figure 1. Comparison of the models on robustness and shape bias (ImageNet-C mCE, lower is better; ImageNet-A accuracy, higher is better; shape factor). The shape factor gives the fraction of dimensions that encode shape cues [17]. Backbone(T) denotes texture-shape debiased (TSD) models [21]. In comparison, ELEAS, denoted by Backbone(E), is more shape biased and shows better performance on the ImageNet-C and ImageNet-A datasets.
More broadly speaking, there is
a mismatch between the cognitive concepts and associated
world knowledge implied by the category labels in image
datasets such as Imagenet and the actual information con-
tent made available to a model via one-hot vectors encod-
ing these labels. In the face of under-determined learning problems, we need to introduce inductive biases to guide the learning process.
Figure 2. Representative performance comparison of our model with the ‘Debiased’ models (TSD [21]) on the ImageNet-A, R, C, and Sketch datasets. The models trained using ELEAS show improved performance on out-of-distribution robustness datasets. The large performance improvement on the ImageNet-A dataset indicates better robustness to natural adversarial examples.
To this end, Geirhos et al. [7] pro-
posed a data augmentation method wherein the texture of
an image was replaced with that of a painting through styl-
ization. Follow-on work improved upon this approach by
replacing textures from other objects (instead of paintings)
and teaching the model to separately label the outer shape
and the substituted texture according to their source image
categories [21]. Both these approaches discourage overde-
pendence on textural features in the learned model; how-
ever, they do not explicitly incentivize shape recognition.
We propose a lightweight adversarial augmentation
technique ELEAS (EdgeLearning for Shape sensitivity)
that is designed to increase shape sensitivity in vision mod-
els. Specifically, we augment a dataset with superposi-
tions of random pairs of images from the dataset, where
one image is processed to produce an edge map, and the
other is modified by shuffling the location of image patches
within the image. The two images are superposed us-
ing a randomly sampled relative mixing weight (similar to
Mixup [30]), and the new superposed image is assigned the
label of the edgemap image. EL EAS is designed to specif-
ically incentivize not only edge detection but shape sensi-
tivity: 1) classifying the edgemap image requires the model
to extract and exploit edges – the only features available
in the image, 2) distinguishing the edgemap object cate-
gory from the superposed shuffled image requires the model
to distinguish the overall edgemap object shape (relevant
edges) from the shuffled image edges (irrelevant edges, less
likely to be “shape like”). We perform extensive experi-
ments over a range of model architectures, image classi-
fication datasets, and probe datasets, comparing ELEAS
against recent baselines. Figure 1 provides a small visual
sample of our findings and results; across various models,
a measure of shape sensitivity [17] correlates very strongly
with measures of classifier robustness [11, 12] (see Results for more details), validating the shape-sensitivity inductive bias. In addition, for a number of model architectures, models trained with ELEAS significantly improve both mea-
sures compared to the previous SOTA data augmentation
approach [21].
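The augmentation itself is simple enough to sketch directly: an edge map of one image is mixed with a patch-shuffled version of another image using a random weight, and the result keeps the edge-map image's label. The gradient-based edge extractor, the 8-pixel patch size, and the mixing range below are stand-ins, not the exact components used by ELEAS.

```python
# Sketch of the augmentation described above: superpose the edge map of one image
# onto a patch-shuffled second image with a random mixing weight, and keep the
# edge-map image's label. The gradient-based edge extractor, patch size, and
# mixing range are stand-ins, not the exact choices used in the paper.
import numpy as np

def gradient_edges(img):
    """Crude gradient-magnitude edge map for a grayscale image in [0,1]."""
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return mag / (mag.max() + 1e-8)

def shuffle_patches(img, patch=8):
    h, w = img.shape
    tiles = [img[i:i + patch, j:j + patch]
             for i in range(0, h, patch) for j in range(0, w, patch)]
    np.random.shuffle(tiles)
    out = np.zeros_like(img)
    it = iter(tiles)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            out[i:i + patch, j:j + patch] = next(it)
    return out

def edge_shape_augment(img_a, label_a, img_b):
    lam = np.random.uniform(0.3, 0.7)                       # random mixing proportion
    mixed = lam * gradient_edges(img_a) + (1 - lam) * shuffle_patches(img_b)
    return mixed, label_a                                   # label follows the edge map

a, b = np.random.rand(32, 32), np.random.rand(32, 32)
aug, label = edge_shape_augment(a, label_a=3, img_b=b)
print(aug.shape, label)
```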
Summing up, we make the following contributions:
• We propose an adversarial augmentation technique,
ELEAS, designed to incentivize shape sensitivity
in vision models. Our augmentation technique is
lightweight (needing only an off-the-shelf edge detec-
tion method) compared to previous proposals that re-
quire expensive GAN-based image synthesis [7, 21].
• In experiments, ELEAS shows increased shape sen-
sitivity on a wide range of tasks designed to probe
this property. Consequently, we obtain increased ac-
curacy in object classification with a 6% improvement
on ImageNet-1K classification accuracy for ViT-Small
among others.
• ELEAS shows high generalizability and out-of-dis-
tribution robustness with 14.2% improvement in
ImageNet-C classification performance and 5.89% in-
crease in shape-bias for Resnet152.
|
Tanay_Efficient_View_Synthesis_and_3D-Based_Multi-Frame_Denoising_With_Multiplane_Feature_CVPR_2023 | Abstract
While current multi-frame restoration methods combine
information from multiple input images using 2D align-
ment techniques, recent advances in novel view synthesis
are paving the way for a new paradigm relying on volu-
metric scene representations. In this work, we introduce
the first 3D-based multi-frame denoising method that sig-
nificantly outperforms its 2D-based counterparts with lower
computational requirements. Our method extends the mul-
tiplane image (MPI) framework for novel view synthesis
by introducing a learnable encoder-renderer pair manip-
ulating multiplane representations in feature space. The
encoder fuses information across views and operates in a
depth-wise manner while the renderer fuses information
across depths and operates in a view-wise manner. The
two modules are trained end-to-end and learn to separate
depths in an unsupervised way, giving rise to Multiplane
Feature (MPF) representations. Experiments on the Spaces
and Real Forward-Facing datasets as well as on raw burst
data validate our approach for view synthesis, multi-frame
denoising, and view synthesis under noisy conditions.
| 1. Introduction
Multi-frame denoising is a classical problem of com-
puter vision where a noise process affecting a set of images
must be inverted. The main challenge is to extract con-
sistent information across images effectively and the cur-
rent state of the art relies on optical flow-based 2D align-
ment [3, 7, 45]. Novel view synthesis, on the other hand, is
a classical problem of computer graphics where a scene is
viewed from one or more camera positions and the task is
to predict novel views from target camera positions. This
problem requires to reason about the 3D structure of the
scene and is typically solved using some form of volumetric
representation [28, 32, 55]. Although the two problems are
traditionally considered distinct, some novel view synthe-
sis approaches have recently been observed to handle noisy
inputs well, and to have a denoising effect in synthesized views by discarding inconsistent information across input views [18, 26]. This observation opens the door to 3D-based multi-frame denoising, by recasting the problem as a special case of novel view synthesis where the input views are noisy and the target views are the clean input views [26, 31].
Figure 1. Top: Our Multiplane Features Encoder-Renderer (MPFER) reimagines the MPI pipeline by moving the multiplane representation to feature space (a depth-wise encoder and a view-wise renderer map noisy inputs to a multiplane feature representation and synthesized outputs). Bottom: MPFER significantly outperforms existing methods (IBRNet [48], NAN [31]) in multiple challenging scenarios, including, here, novel view synthesis from 8 highly degraded inputs.
Recently, novel view synthesis has been approached as
an encoding-rendering process where a scene representa-
tion is first encoded from a set of input images and an ar-
bitrary number of novel views are then rendered from this
scene representation. In the Neural Radiance Field (NeRF)
framework for instance, the scene representation is a radi-
ance field function encoded by training a neural network on
the input views. Novel views are then rendered by querying
and integrating this radiance field function over light rays
originating from a target camera position [2, 24, 28]. In the
Multiplane Image (MPI) framework on the other hand, the
scene representation is a stack of semi-transparent colored
layers arranged at various depths, encoded by feeding the
input views to a neural network trained on a large number
of scenes. Novel views are then rendered by warping and
overcompositing the semi-transparent layers [8, 41, 55].
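For reference, the rendering step of a standard MPI, the fixed operator that the learnable renderer introduced later in this paper replaces, is back-to-front alpha overcompositing of the warped planes; a minimal version is sketched below, with the homography warping into the target view omitted.

```python
# Minimal sketch of standard MPI rendering: back-to-front alpha overcompositing
# of D semi-transparent RGBA planes (the fixed operator that the learnable
# renderer discussed in this paper replaces). Homography warping of each plane
# into the target view is omitted for brevity.
import numpy as np

def overcomposite(rgba_planes):
    """rgba_planes: [D,H,W,4] ordered back (index 0) to front (index D-1)."""
    out = np.zeros(rgba_planes.shape[1:3] + (3,))
    for plane in rgba_planes:                     # back-to-front "over" operator
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out

planes = np.random.rand(32, 16, 16, 4)           # toy multiplane image, 32 depths
image = overcomposite(planes)
print(image.shape)                                # (16, 16, 3)
```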
In the present work, we adopt the MPI framework be-
cause it is much lighter than the NeRF framework compu-
tationally. The encoding stage only requires one inference
pass on a network that generalizes to new scenes instead
of training one neural network per-scene, and the rendering
stage is essentially free instead of requiring a large number
of inference passes. However, the standard MPI pipeline
struggles to predict multiplane representations that are self-
consistent across depths from multiple viewpoints. This
problem can lead to depth-discretization artifacts in syn-
thesized views [40] and has previously been addressed at
the encoding stage using computationally expensive mech-
anisms and a large number of depth planes [8, 11, 27, 40].
Here, we propose to enforce cross-depth consistency at the
rendering stage by replacing the fixed overcompositing op-
erator with a learnable renderer. This change of approach
has three important implications. First, the encoder mod-
ule can now process depths independently from each other
and focus on fusing information across views. This sig-
nificantly reduces the computational load of the encoding
stage. Second, the scene representation changes from a
static MPI to Multiplane Features (MPF) rendered dynam-
ically. This significantly increases the expressive power of
the scene encoding. Finally, the framework’s overall per-
formance is greatly improved, making it suitable for novel
scenarios including multi-frame denoising where it outper-
forms standard 2D-based approaches at a fraction of their
computational cost. Our main contributions are as follow:
• We solve the cross-depth consistency problem for multi-
plane representations at the rendering stage, by introduc-
ing a learnable renderer.
• We introduce the Multiplane Feature (MPF) representa-
tion, a generalization of the multiplane image with higher
representational power.
• We re-purpose the multiplane image framework origi-
nally developed for novel view synthesis to perform 3D-
based multi-frame denoising.
• We validate the approach with experiments on 3 tasks and
3 datasets and significantly outperform existing 2D-based
and 3D-based methods for multi-frame denoising.
|
Tang_Master_Meta_Style_Transformer_for_Controllable_Zero-Shot_and_Few-Shot_Artistic_CVPR_2023 | Abstract
Transformer-based models achieve favorable perfor-
mance in artistic style transfer recently thanks to their
global receptive field and powerful multi-head/layer atten-
tion operations. Nevertheless, the over-paramerized multi-
layer structure increases parameters significantly and thus
presents a heavy burden for training. Moreover, for the
task of style transfer, vanilla Transformer that fuses con-
tent and style features by residual connections is prone to
content-wise distortion. In this paper, we devise a novel
Transformer model termed as Master specifically for style
transfer. On the one hand, in the proposed model, differ-
ent Transformer layers share a common group of parame-
ters, which (1) reduces the total number of parameters, (2)
leads to more robust training convergence, and (3) is read-
ily to control the degree of stylization via tuning the num-
ber of stacked layers freely during inference. On the other
hand, different from the vanilla version, we adopt a learn-
able scaling operation on content features before content-
style feature interaction, which better preserves the original
similarity between a pair of content features while ensuring
the stylization quality. We also propose a novel meta learn-
ing scheme for the proposed model so that it can not only
work in the typical setting of arbitrary style transfer, but
also adapt to the few-shot setting, by only fine-tuning
the Transformer encoder layer in the few-shot stage for one
specific style. Text-guided few-shot style transfer is achieved for the first time with the proposed framework. Extensive exper-
iments demonstrate the superiority of Master under both
zero-shot and few-shot style transfer settings.
| 1. Introduction
Artistic style transfer aims at applying style patterns like
colors and textures of a reference image to a given content
image while preserving the semantic structure of the con-
tent. In contrast to the pioneering optimization method [9]
and early per-style-per-model methods like [17, 26], arbi-
trary style transfer methods [3, 13, 16, 21, 22, 24, 31] enable
real time style transfer for any style image in the test time in
a zero-shot manner. The flexibility has led to this arbitrary-
style-per-model fashion to dominate style transfer research.
Recently, to enhance the representation of global infor-
mation in arbitrary style transfer, Transformer [40] is intro-
duced to this area [4], leveraging the global receptive field
and powerful multi-head/layer structure, and achieves su-
perior performance. Nevertheless, the over-parameterized
multi-layer structure increases model parameters signifi-
cantly. As shown in Fig. 1(a), there are 25.94M learn-
able parameters for a 3-layer Transformer structure in
StyTr2 [4], v.s. 3.50M in AdaIN [13], a simple but effective
baseline in arbitrary style transfer. Such a large model for
standard Transformer inevitably presents a heavy burden for
training. As shown in Fig. 1(b), when there are more than
4 layers, the vanilla Transformer even fails to converge in training, which limits the scalability of the Transformer model in style transfer.
Figure 2. Residual connections in the vanilla Transformer tend to destroy the original similarity relationship of the content structure in the style transfer task. Our model is designed to address this problem with learnable scaling parameters. The top row shows a simple 2D visualization and the bottom one provides a qualitative example.
Moreover, vanilla Transformer relies on residual connec-
tions [12] to stylize content features, which suffers from the
content-distortion problem. We illustrate this effect with
a 2D visualization in Fig. 2(top), where residual connec-
tions lead the transformation results of two content feature
vectors to move towards the dominant style features and
thereby tend to eliminate their original distinction. The vi-
sual effect is that the stylized images would be dominated
by some strong style features, such as salient edges, with the
original self-(dis)similarity of content structures destroyed,
as the example shown in Fig. 2(bottom).
To address these drawbacks, in this paper we are dedicated to devising a novel Transformer architecture specifically for artistic style transfer. On the one hand, in the proposed model, different Transformer layers share a common group of parameters, and a random number of stacked layers is adopted for each training iteration. Compared with the original version, sharing parameters across different layers reduces the total number of parameters significantly and eases training convergence. As a byproduct, our model can also readily control the degree of stylization by tuning the number of stacked layers freely at inference time, as shown in Fig. 1 (right). On
the other hand, we equip Transformer with learnable scale
parameters for content-style interactions instead of residual
connections, which alleviates content distortion to a large
degree and better preserves content structures while render-
ing vivid style patterns simultaneously, as shown in the 2D
visualization and the qualitative example in Fig. 2.
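To make this design concrete, the following is a minimal sketch of a shared content-style layer that replaces the residual shortcut with learnable scaling; the module name, the exact placement of the scaling, and the attention configuration are our own illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ScaledContentStyleLayer(nn.Module):
    """Sketch of one shared Transformer layer for style transfer.

    Instead of the residual update c' = c + Attn(c, s, s), content
    features are rescaled by learnable per-channel parameters before
    interacting with style features, which is intended to preserve the
    relative (dis)similarity of content features (details are
    illustrative assumptions)."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.scale = nn.Parameter(torch.ones(dim))   # learnable scaling of content
        self.norm = nn.LayerNorm(dim)

    def forward(self, content, style):
        # content, style: (batch, tokens, dim) feature sequences
        q = self.norm(content * self.scale)          # scaled content as queries
        stylized, _ = self.attn(q, style, style)     # content-style cross attention
        return stylized                               # no residual shortcut back to content


def stylize(content, style, layer, num_layers=3):
    """Apply the same shared layer a variable number of times;
    more repetitions give a stronger stylization effect."""
    x = content
    for _ in range(num_layers):
        x = layer(x, style)
    return x
```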
Furthermore, beyond the typical zero-shot arbitrary style
transfer, leveraging a meta learning scheme, our method
is adaptable to the few-shot setting. By only fine-tuning the Transformer encoder layer in the few-shot stage, rapid adaptation of the model to a specific style is possible within a limited number of updates, where the stylization with only 1 layer can be further improved, as shown in Fig. 1. Beyond that, we achieve text-guided few-shot style transfer for the first time with this framework, which largely alleviates the training burden of previous per-text-per-model solutions. In this sense, we term the overall pipeline Meta Style Transformer (Master). Our contributions are summa-
rized as follows:
• We propose a novel Transformer architecture specif-
ically for artistic style transfer. It shares parameters
between different layers, which not only helps training
convergence, but also allows convenient control over
the stylization effect.
• We identify the content distortion problem of resid-
ual connections in Transformer and propose learnable
scale parameters as an option to alleviate the problem.
• We introduce a meta learning framework for adapting
original training setting of zero-shot style transfer to
the few-shot scenario, with which our Master achieves
very good trade-off between flexibility and quality.
• Experiments show that our model achieves results bet-
ter than those of arbitrary-style-per-model methods.
Furthermore, under the few-shot setting, either condi-
tioned on image or text, Master can even yield perfor-
mance on par with that of per-style-per-model methods
with significantly less training cost.
|
Suhail_Omnimatte3D_Associating_Objects_and_Their_Effects_in_Unconstrained_Monocular_Video_CVPR_2023 | Abstract
We propose a method to decompose a video into a back-
ground and a set of foreground layers, where the background
captures stationary elements while the foreground layers
capture moving objects along with their associated effects
(e.g. shadows and reflections). Our approach is designed for
unconstrained monocular videos, with an arbitrary camera
and object motion. Prior work that tackles this problem
assumes that the video can be mapped onto a fixed 2D can-
vas, severely limiting the possible space of camera motion.
Instead, our method applies recent progress in monocular
camera pose and depth estimation to create a full, RGBD
video layer for the background, along with a video layer
for each foreground object. To solve the underconstrained
decomposition problem, we propose a new loss formulation
based on multi-view consistency. We test our method on
challenging videos with complex camera motion and show
significant qualitative improvement over current approaches.
| 1. Introduction
Decomposing a video into meaningful layers (as shown in Fig. 1) is a long-standing and complex problem [42] that has seen a recent surge in progress with the application of deep
neural networks [17, 24, 25, 46]. A challenging variant of
layer decomposition is the omnimatte [25] task, which aims
to separate an input video into a background and multiple
foreground layers, each containing an object of interest along
with its correlated effects such as shadows and reflections,
thus enabling video editing applications such as object re-
moval, background replacement and retiming. The original
work on omnimatte [24, 25] introduced a self-supervised ap-
proach for decomposing a video by assuming the background
can be unwrapped onto a single static 2D background canvas
using homography warping. Following work [17,46] relaxed
the homography restriction, but maintained the necessity of
unwrapping onto a 2D canvas.
This 2D modelling, however, limits the applicability of
these methods to camera motions that have limited or zero
parallax; intuitively, only panning camera motions are al-
lowed. If the camera’s center of projection moves a sig-
nificant distance over the video, 2D methods are unable to
learn an accurate background and are forced to place the
background detail in a foreground layer (Figure 2). Since
camera parallax is quite common, 2D methods are severely
restricted in practice.
Figure 2. Existing methods fail on scenes with parallax. Top row: Omnimatte [25] fails due to inaccurate homography registration, reconstructing the majority of the video in the foreground layer. Middle row: Neural-Atlas [17] incorrectly places trees in the foreground layer and struggles to inpaint the person region. Bottom row: Our method obtains a clean background and captures the person and their shadow in the foreground layer.
To handle camera parallax, we exploit another recent line of work on estimating 3D camera position and depth maps
from casually-captured video with moving objects [18, 26,
48, 49]. Instead of a 2D layer decomposition, we propose
to learn a 3D background model that varies per-frame and
includes inpainted RGB and depth in the regions occluded
by the foreground. Allowing the background to vary per-
frame, rather than assuming a global, static canvas, creates an
ill-posed decomposition problem, as variation in each input
pixel could be explained by a foreground object, background,
or both. To resolve this ambiguity, we enforce that the
background varies slowly over time with 3D multi-view
constraints using dense depth and camera pose estimates [48].
As shown in Fig. 2, our model can successfully separate the
layers obtaining a clean background RGB whilst capturing
only the person and their shadow in the foreground layer.
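As a rough illustration of the kind of multi-view constraint described above, the sketch below warps the background layer predicted for one frame into another frame using the estimated depth and camera poses and penalizes the color disagreement; all function and argument names are placeholders, and the paper's actual regularization terms may differ.

```python
import torch
import torch.nn.functional as F

def multiview_background_loss(bg_rgb_t, bg_depth_t, bg_rgb_s, pose_t, pose_s, K):
    """Encourage the per-frame background layers to agree in 3D.

    bg_rgb_t, bg_rgb_s: (3, H, W) background colors at frames t and s.
    bg_depth_t: (H, W) background depth at frame t.
    pose_t, pose_s: (4, 4) camera-to-world matrices; K: (3, 3) intrinsics.
    Masking of pixels that project outside frame s is omitted for brevity."""
    H, W = bg_depth_t.shape
    # Back-project frame-t pixels to 3D using the predicted background depth.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float().reshape(3, -1)
    cam_pts = torch.linalg.inv(K) @ pix * bg_depth_t.reshape(1, -1)
    world = pose_t[:3, :3] @ cam_pts + pose_t[:3, 3:4]
    # Re-project into frame s and sample its background layer there.
    world_to_s = torch.linalg.inv(pose_s)
    cam_s = world_to_s[:3, :3] @ world + world_to_s[:3, 3:4]
    proj = K @ cam_s
    uv = proj[:2] / proj[2:].clamp(min=1e-6)
    grid = torch.stack([uv[0] / (W - 1) * 2 - 1, uv[1] / (H - 1) * 2 - 1], dim=-1)
    warped = F.grid_sample(bg_rgb_s[None], grid.reshape(1, H, W, 2), align_corners=True)
    # Penalize color disagreement between the two views of the background.
    return (warped[0] - bg_rgb_t).abs().mean()
```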
Contributions. Our main contribution is a novel video
decomposition method capable of accurately separating an
input video with complex object and camera motion ( e.g.,
parallax) into a background and object layers. To address
the ill-posed decomposition problem, we design regulariza-
tion terms based on multi-view consistency and smoothness
priors. The resulting model produces clean decompositions
of real-world videos where previous methods fail.
|
Sun_Regularizing_Second-Order_Influences_for_Continual_Learning_CVPR_2023 | Abstract
Continual learning aims to learn on non-stationary data
streams without catastrophically forgetting previous knowl-
edge. Prevalent replay-based methods address this chal-
lenge by rehearsing on a small buffer holding the seen data,
for which a delicate sample selection strategy is required.
However, existing selection schemes typically seek only to
maximize the utility of the ongoing selection, overlooking
the interference between successive rounds of selection.
Motivated by this, we dissect the interaction of sequential
selection steps within a framework built on influence func-
tions. We manage to identify a new class of second-order
influences that will gradually amplify incidental bias in the
replay buffer and compromise the selection process. To reg-
ularize the second-order effects, a novel selection objective
is proposed, which also has clear connections to two widely
adopted criteria. Furthermore, we present an efficient im-
plementation for optimizing the proposed criterion. Exper-
iments on multiple continual learning benchmarks demon-
strate the advantage of our approach over state-of-the-art
methods. Code is available at https://github.com/
feifeiobama/InfluenceCL .
| 1. Introduction
The ability to continually accumulate knowledge is a
hallmark of intelligence, yet prevailing machine learning
systems struggle to remember old concepts after acquir-
ing new ones. Continual learning has hence emerged to
tackle this issue, also known as catastrophic forgetting ,
and thereby enable the learning on long task sequences
which are frequently encountered in real-world applica-
tions [15, 17]. Amongst a variety of valid methods, the
replay-based approaches [33, 39] achieve evidently strong
results by buffering a small coreset of previously seen data
for rehearsal in later stages. Due to the rigorous constraints
on memory overhead, the replay buffer needs to be carefully
maintained such that only the samples deemed most critical
for preserving overall performance are selected during the learning procedure.
Figure 1. (a) In continual learning, earlier coreset selection exerts a profound influence on subsequent steps through the data flow. (b) By ignoring this, a greedy selection strategy can degrade over time. Therefore, we propose to model and regularize the influence of each selection on the future.
Despite much effort in sophisticated design of coreset
selection criteria [3,7,47,52], existing works are mostly de-
veloped for an oversimplified scenario, where each round
of selection is considered isolated and primarily focused on
refining single-round performance. For example, Borsos et
al. [7] greedily minimize the empirical risk on the currently
available subset of data. However, as shown in Fig. 1a, the
actual selection process within continual learning is rather
different in that each prior selection defines the input for
subsequent selection and therefore has an inevitable impact
on the later decision. Neglecting such interactions between
successive selection steps, as in previous practice, may re-
sult in a degraded coreset after the prolonged selection pro-
cess. To maximize overall performance, an ideal continual
learning method should take into account the future influ-
ence of each round of selection, which remains unresolved
due to the obscure role of intermediate selection results.
This work models the interplay among multiple selection
steps with the classic tool of influence functions [21, 29],
which estimates the effect of each training point by perturb-
ing sample weights. Similarly, we begin by upweighting
two samples from consecutive time steps, and uncover that
the latter one’s influence evaluation is biased due to the con-
sequent perturbations in test gradient and coreset Hessian.
This gives rise to a new class of second-order influences
that interfere with the initial influence-based selection strat-
egy. To be specific, the selection tends to align across dif-
ferent rounds, favoring those samples with a larger gradient
projection along the previously upweighted ones. It further
suggests that some unintended adverse effects of the early
selection steps, such as incidental bias introduced into the
replay buffer, will be amplified after rounds of selection and
impair the final performance.
To address the newly discovered disruptive effects, we
propose to regularize each round of selection beforehand
based on its expected second-order influence, as illustrated
in Fig. 1b. Intuitively, the selection with a large magnitude
of second-order influence will substantially interfere with
future influence estimates, and therefore should be avoided.
However, the magnitude itself cannot be precalculated for
direct guidance of the selection, since it is related to some
unknown future data. We instead derive a tractable upper
bound for the magnitude, which takes the form of an ℓ2-norm, and integrate it with the vanilla influence functions to serve as the final selection criterion. The proposed regu-
larizer can be interpreted with clarity as it equates to a cou-
ple of existing criteria ranging from gradient matching to
diversity, under varying simplifications. Finally, we present
an efficient implementation that tackles the technical chal-
lenges in selecting with neural networks.
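For intuition only, the following sketch combines a vanilla influence-function score with an ℓ2-style penalty when picking replay samples; the quantity penalized here (the squared candidate gradient norm) and its weighting are stand-ins for illustration, not the paper's exact criterion.

```python
import torch

def influence_scores(cand_grads, test_grad, hvp_inv_fn):
    """First-order influence of each candidate on the validation loss
    (classic influence-function estimate; hvp_inv_fn applies the
    inverse-Hessian-vector product, e.g. approximated via LiSSA)."""
    s_test = hvp_inv_fn(test_grad)                   # H^{-1} g_test, shape (D,)
    return cand_grads @ s_test                       # one score per candidate

def regularized_selection(cand_grads, test_grad, hvp_inv_fn, k, lam=0.1):
    """Pick k samples for the replay buffer.

    The score combines the vanilla influence with an l2-norm penalty on
    each candidate gradient, intended to discourage selections that
    would amplify second-order interference in later rounds (lam and
    the exact combination are illustrative choices)."""
    infl = influence_scores(cand_grads, test_grad, hvp_inv_fn)
    penalty = lam * cand_grads.norm(dim=1) ** 2
    score = infl - penalty
    return torch.topk(score, k).indices
```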
Our contributions are summarized as below:
• We investigate the previously-neglected interplay be-
tween consecutive selection steps within an influence-
based selection framework. A new type of second-
order influences is identified, and further analysis
states its harmfulness in continual learning.
• A novel regularizer is proposed to mitigate the second-
order interference. The regularizer is associated with
two other popular selection criteria and can be effi-
ciently optimized with our implementation.
• Comprehensive experiments on three continual learn-
ing benchmarks (Split CIFAR-10, Split CIFAR-100
and Split miniImageNet) clearly illustrate that our
method achieves new state-of-the-art performance.
|
Tang_NeuMap_Neural_Coordinate_Mapping_by_Auto-Transdecoder_for_Camera_Localization_CVPR_2023 | Abstract
This paper presents an end-to-end neural mapping
method for camera localization, encoding a whole scene
into a grid of latent codes, with which a Transformer-
based auto-decoder regresses 3D coordinates of query pix-
els. State-of-the-art camera localization methods require
each scene to be stored as a 3D point cloud with per-point
features, which takes several gigabytes of storage per scene.
While compression is possible, the performance drops sig-
nificantly at high compression rates. NeuMap achieves ex-
tremely high compression rates with minimal performance
drop by using 1) learnable latent codes to store scene in-
formation and 2) a scene-agnostic Transformer-based auto-
decoder to infer coordinates for a query pixel. The scene-
agnostic network design also learns robust matching priors
by training with large-scale data, and further allows us to
just optimize the codes quickly for a new scene while fixing
the network weights. Extensive evaluations with five bench-
marks show that NeuMap outperforms all the other coor-dinate regression methods significantly and reaches similar
performance as the feature matching methods while hav-
ing a much smaller scene representation size. For example,
NeuMap achieves 39.1% accuracy in Aachen night bench-
mark with only 6MB of data, while other compelling meth-
ods require 100MB or a few gigabytes and fail completely
under high compression settings. The codes are available
at https://github.com/Tangshitao/NeuMap.
| 1. Introduction
Visual localization determines camera position and ori-
entation based on image observations, an essential task for
applications such as VR/AR and self-driving cars. Despite
significant progress, accurate visual localization remains a
challenge, especially when dealing with large viewpoint and
illumination changes. Compact map representation is an-
other growing concern, as applications like delivery robots
may require extensive maps. Standard visual localization
techniques rely on massive databases of keypoints with 3D
coordinates and visual features, posing a significant bottle-
neck in real-world applications.
Visual localization techniques generally establish 2D-
3D correspondences and estimate camera poses using
perspective-n-point (PnP) [22] with a sampling method like
RANSAC [15]. These methods can be divided into two cat-
egories: Feature Matching (FM) [11, 34, 37, 49] and Scene
Coordinate Regression (SCR) [3, 5, 7, 40]. FM methods,
which are trained on a vast amount of data covering vari-
ous viewpoint and illumination differences, use sparse ro-
bust features extracted from the query image and matched
with those in candidate scene images. This approach ex-
ploits learning-based feature extraction and correspondence
matching methods [13, 35, 42, 44] to achieve robust local-
ization. However, FM methods require large maps, mak-
ing them impractical for large-scale scenes. Many meth-
ods [11, 27, 49] have been proposed to compress this map
representation, but often at the cost of degraded perfor-
mance. On the other hand, SCR methods directly regress
a dense scene coordinate map using a compact random for-
est or neural network, providing accurate results for small-
scale indoor scenes. However, their compact models lack
generalization capability and are often restricted to lim-
ited viewpoint and illumination changes. Approaches such
as ESAC [5] handle larger scenes by dividing them into
smaller sub-regions, but still struggle with large viewpoint
and illumination changes.
We design our method to enjoy the benefits of compact
scene representation of SCR methods and the robust per-
formance of FM methods. Similar to FM methods, we
also focus on a sparse set of robustly learned features to
deal with large viewpoint and illumination changes. On
the other hand, we exploit similar ideas to SCR methods
to regress the 3D scene coordinates of these sparse fea-
tures in the query images with a compact map represen-
tation. Our method, dubbed neural coordinate mapping
(NeuMap), first extracts robust features from images and
then applies a transformer-based auto-decoder (i.e., auto-
transdecoder) to learn: 1) a grid of scene codes encoding
the scene information (including 3D scene coordinates and
feature information) and 2) the mapping from query im-
age feature points to 3D scene coordinates. At test time,
given a query image, after extracting image features of its
key-points, the auto-transdecoder regresses their 3D coor-
dinates via cross attention between image features and la-
tent codes. In our method, the robust feature extractor and
the auto-transdecoder are scene-agnostic, where only latent
codes are scene specific. This design enables the scene-
agnostic parameters to learn matching priors across scenes
while maintaining a small data size. To handle large scenes,
we divide the scene into smaller sub-regions and process
them independently while applying a network pruning tech-
nique [25] to drop redundant codes.
We demonstrate the effectiveness of NeuMap with a di-
verse set of five benchmarks, ranging from indoor to out-
door and small to large-scale: 7scenes (indoor small), Scan-
Net (indoor small), Cambridge Landmarks (outdoor small),
Aachen Day & Night (outdoor large), and NAVER LABS (indoor large). In small-scale datasets (i.e., 7scenes and
Cambridge Landmarks), NeuMap compresses the scene
representation by around 100-1000 times without any per-
formance drop compared to DSAC++. In large-scale
datasets (i.e., Aachen Day & Night and NAVER LABS), NeuMap significantly outperforms the current state-of-the-
art at high compression settings, namely, HLoc [34] with a
scene-compression technique [49]. On the ScanNet dataset, we demonstrate quick fine-tuning experiments, where we only optimize the codes for a new scene while fixing the scene-agnostic network weights.
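The sketch below illustrates, under our own assumptions about layer sizes and structure, how scene-specific latent codes and a scene-agnostic cross-attention decoder could regress per-keypoint 3D coordinates.

```python
import torch
import torch.nn as nn

class CoordinateAutoTransdecoder(nn.Module):
    """Sketch: scene-specific latent codes + scene-agnostic decoder.

    Query-image keypoint features attend over a grid of learnable
    scene codes, and an MLP head regresses a 3D scene coordinate per
    keypoint (layer sizes and structure are illustrative)."""

    def __init__(self, num_codes=128, dim=256, num_heads=8):
        super().__init__()
        self.codes = nn.Parameter(torch.randn(num_codes, dim))   # scene-specific
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 3))

    def forward(self, keypoint_feats):
        # keypoint_feats: (batch, num_keypoints, dim) robust image features
        codes = self.codes.unsqueeze(0).expand(keypoint_feats.size(0), -1, -1)
        attended, _ = self.cross_attn(keypoint_feats, codes, codes)
        return self.head(attended)                               # (batch, K, 3) scene coordinates
```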
|
Tang_Parts2Words_Learning_Joint_Embedding_of_Point_Clouds_and_Texts_by_CVPR_2023 | Abstract
Shape-Text matching is an important task of high-level
shape understanding. Current methods mainly represent a
3D shape as multiple 2D rendered views, which obviously cannot be understood well due to the structural ambiguity caused by self-occlusion in a limited number of views.
To resolve this issue, we directly represent 3D shapes as
point clouds, and propose to learn joint embedding of point
clouds and texts by bidirectional matching between parts
from shapes and words from texts. Specifically, we first seg-
ment the point clouds into parts, and then leverage optimal
transport method to match parts and words in an optimized
feature space, where each part is represented by aggregat-
ing features of all points within it and each word is ab-
stracted by its contextual information. We optimize the fea-
ture space in order to enlarge the similarities between the
paired training samples, while simultaneously maximizing
the margin between the unpaired ones. Experiments demon-
strate that our method achieves a significant improvement
in accuracy over the SOTAs on multi-modal retrieval tasks
on the Text2Shape dataset. Code is available here.
| 1. Introduction
Interaction scenarios, such as the metaverse and computer-aided design (CAD), create a large number of 3D shapes and text descriptions. To enable a more intelligent process
of interaction, it is important to bridge the gap between 3D
data and linguistic data. Recently, 3D shapes with rich geo-
metric details have been available in large-scale 3D deep
learning benchmark datasets [5, 34]. Beyond 3D shapes
themselves, text descriptions can also provide additional in-
formation. However, it is still hard to jointly understand 3D
shapes and texts, since representing different modalities in
a common semantic space is still very challenging.
Existing methods aim at learning a joint embedding space to connect various 3D representations with texts, such
as voxel grids [6] and multi-view rendered images [11, 12].
However, due to the low resolution and self-occlusions, it
is hard for those methods mentioned above to improve the
ability of joint understanding of shapes and texts. On the
other hand, previous shape-text matching methods [6, 12,
37] usually take the global features of the entire 3D shape
for text matching, making it challenging to capture the local
geometries, and thus are not suitable for matching detailed
geometric descriptions.
Region-based matching approaches are commonly employed in the image-text matching task [21–23, 27],
whereby visual-text alignment is established at the seman-
tic level to enhance the performance of retrieval. These
models compute the local similarities between regions and
words and then aggregate the local information to obtain
the global metrics between the heterogeneous pairs. How-
ever, these two-stage methods, built on pre-trained segmentation networks, break the connection between the matching embeddings and the segmentation prior.
In this paper, we introduce an optimal transport based
shape-text matching method to achieve fine-grained align-
ment and retrieval of 3D shapes and texts, as shown in Fig-
ure 1. To mitigate the influence of low-resolution or self-
occlusions, we directly represent the shape as point clouds
and learn a part-level segmentation prior. Afterward, we
leverage optimal transport to build the regional cross-modal
correspondences and achieve more precise retrieval results.
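As a hedged illustration of this matching step, the sketch below computes an EMD-style part-word matching score with entropic optimal transport (Sinkhorn iterations); the exact cost, marginals, and normalization used in the paper may differ.

```python
import torch

def sinkhorn_matching_score(part_feats, word_feats, eps=0.05, iters=50):
    """Match shape parts to words with entropic optimal transport.

    part_feats: (P, D) aggregated per-part features of a point cloud.
    word_feats: (W, D) contextual word features of the caption.
    Returns a scalar similarity (negative transport cost)."""
    parts = torch.nn.functional.normalize(part_feats, dim=-1)
    words = torch.nn.functional.normalize(word_feats, dim=-1)
    cost = 1.0 - parts @ words.t()                     # (P, W) cosine distance
    P, W = cost.shape
    a = torch.full((P,), 1.0 / P)                      # uniform marginals (assumption)
    b = torch.full((W,), 1.0 / W)
    K = torch.exp(-cost / eps)
    u = torch.ones_like(a)
    for _ in range(iters):                             # Sinkhorn-Knopp iterations
        v = b / (K.t() @ u)
        u = a / (K @ v)
    transport = torch.diag(u) @ K @ torch.diag(v)      # approximate transport plan
    return -(transport * cost).sum()                   # higher = better match
```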
Our main contributions are summarized as follows:
• We propose a novel end-to-end network framework
to learn the joint embedding of point clouds and
texts, which enables the bidirectional matching be-
tween parts from point clouds and words from texts.
• We leverage optimal transport theory to obtain the
best matches between parts and words and incorporate
Earth Mover’s Distance (EMD) to describe the match-
ing score.
• To the best of our knowledge, our proposed network
achieves SOTA results in joint 3D shape/text understanding tasks in terms of various evaluation metrics.
Figure 1. Comparison between the global-based matching method and our proposed method. The proposed end-to-end framework aims to learn the joint embedding of point clouds and text by matching parts to words. It can either retrieve shapes using text or vice versa. Our novelty lies in the way of jointly learning embeddings of point clouds and texts.
|
Uy_SCADE_NeRFs_from_Space_Carving_With_Ambiguity-Aware_Depth_Estimates_CVPR_2023 | Abstract
Neural radiance fields (NeRFs) have enabled high fidelity
3D reconstruction from multiple 2D input views. However,
a well-known drawback of NeRFs is the less-than-ideal per-
formance under a small number of views, due to insufficient
constraints enforced by volumetric rendering. To address this
issue, we introduce SCADE, a novel technique that improves
NeRF reconstruction quality on sparse, unconstrained in-
put views for in-the-wild indoor scenes. To constrain NeRF
reconstruction, we leverage geometric priors in the form
of per-view depth estimates produced with state-of-the-art
monocular depth estimation models, which can generalize
across scenes. A key challenge is that monocular depth es-
timation is an ill-posed problem, with inherent ambiguities.
To handle this issue, we propose a new method that learns
to predict, for each view, a continuous, multimodal distribu-
tion of depth estimates using conditional Implicit Maximum
Likelihood Estimation (cIMLE). In order to disambiguate by
exploiting multiple views, we introduce an original space
carving loss that guides the NeRF representation to fuse
multiple hypothesized depth maps from each view and distill
from them a common geometry that is consistent with all
views. Experiments show that our approach enables higher
fidelity novel view synthesis from sparse views. Our project
page can be found at scade-spacecarving-nerfs.github.io.
| 1. Introduction
Neural radiance fields (NeRF) [24] have enabled high
fidelity novel view synthesis from dozens of input views.
Such a number of views is difficult to obtain in practice,
however. When given only a small number of sparse views,
vanilla NeRF tends to struggle with reconstructing shape
accurately, due to inadequate constraints enforced by the
volume rendering loss alone.
Shape priors can help remedy this problem. Various forms
of shape priors have been proposed for NeRFs, such as object
category-level priors [13], and handcrafted data-independent
priors [25]. The former requires knowledge of category
labels and is not applicable to scenes, and the latter is agnos-
tic to the specifics of the scene and only encodes low-level
regularity (e.g., local smoothness). A form of prior that
is both scene-dependent and category-agnostic is per-view
monocular depth estimates, which have been explored in
prior work [6,32]. Unfortunately, monocular depth estimates
are often inaccurate, due to estimation errors and inherent
ambiguities, such as albedo vs. shading (cf. check shadow
illusion), concavity vs. convexity (cf. hollow face illusion),
distance vs. scale (cf. miniature cinematography), etc. As a
result, the incorrect or inconsistent priors imposed by such
depth estimates may mislead the NeRF into reconstructing
incorrect shape and produce artifacts in the generated views.
In this paper, we propose a method that embraces the
uncertainty and ambiguities present in monocular depth esti-
mates, by modelling a probability distribution over depth es-
timates. The ambiguities are retained at the stage of monocu-
lar depth estimation, and are only resolved once information
from multiple views are fused together. We do so with a prin-
cipled loss defined on probability distributions over depth
estimates for different views. This loss selects the subset of
modes of probability distributions that are consistent across
all views and matches them with the modes of the depth
distribution along different rays as modelled by the NeRF.
It turns out that this operation can be interpreted as a proba-
bilistic analogue of classical depth-based space carving [17].
For this reason, we dub our method Space Carving with
Ambiguity-aware Depth Estimates , or SCADE for short.
Compared to prior approaches of depth supervision [6,32]
that only supervise the moments of NeRF’s depth distribu-
tion (rather than the modes), our key innovation is that we
supervise the modes of NeRF’s depth distribution. The su-
pervisory signal provided by the former is weaker than the
latter, because the former only constrains the value of an in-
tegral aggregated along the ray, whereas the latter constrains
the values at different individual points along the ray. Hence,
the supervisory signal provided by the former is 2D (because
it integrates over a ray), whereas the supervisory signal pro-
vided by our method is 3D (because it is point-wise) and
thus can be more fine-grained.
An important technical challenge is modelling probability
distributions over depth estimates. Classical approaches use
simple distributions with closed-form probability densities
such as Gaussian or Laplace distributions. Unfortunately
these distributions are not very expressive, since they only
have a single mode (known as “unimodal”) and have a fixed
shape for the tails. Since each interpretation of an ambiguous
image should be a distinct mode, these simple unimodal dis-
tributions cannot capture the complex ambiguities in depth
estimates. Naïve extensions like a mixture of Gaussians are
also not ideal because some images are more ambiguous than
others, and so the number of modes needed may differ for the
depth estimates of different images. Moreover, learning such
a mixture requires backpropagating through E-M, which is
nontrivial. Any attempt at modifying the probability density
to make it capable of handling a variable number of modes
can easily run into an intractable partition function [4,10,31],
which makes learning difficult because maximum likelihood
requires the ability to evaluate the probability density, which
is a function of the partition function.
To get around this conundrum, we propose representing
the probability distribution with a set of samples generated
from a neural network. Such a distribution can be learned
with a conditional GAN; however, because GANs suffer
from mode collapse, they cannot model multiple modes [11]
and are therefore unsuited to modelling ambiguity. Instead,
we propose leveraging conditional Implicit Maximum Like-
lihood Estimation (cIMLE) [19, 28] to learn the distribution,
which is designed to avoid mode collapse.
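For illustration, the sketch below shows one cIMLE-style training step for a conditional depth sampler: several hypotheses are drawn and only the one closest to the ground truth is optimized; the network interface (`depth_net`, `latent_dim`) and the loss are placeholders, not the paper's exact objective.

```python
import torch

def cimle_depth_step(depth_net, image_feat, gt_depth, num_samples=8):
    """One cIMLE-style training step for a multimodal depth sampler.

    depth_net(image_feat, z) -> depth-map hypothesis conditioned on a
    latent code z. Several hypotheses are drawn per image and only the
    one closest to ground truth receives gradients, which encourages
    the sampler to cover multiple depth modes without collapsing."""
    zs = torch.randn(num_samples, depth_net.latent_dim)   # latent_dim: assumed attribute
    with torch.no_grad():
        errors = torch.stack([
            (depth_net(image_feat, z) - gt_depth).abs().mean() for z in zs
        ])
        best = int(errors.argmin())
    # Re-run the closest hypothesis with gradients enabled and minimize its error.
    loss = (depth_net(image_feat, zs[best]) - gt_depth).abs().mean()
    loss.backward()
    return loss.item()
```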
Figure 2. Ambiguities from a single image. We show samples from our ambiguity-aware depth estimates that are able to handle different types of ambiguities. (Top) An image of a chair taken from the top view. Without context of the scene, it is unclear that it is an image of a chair. We show different samples from our ambiguity-aware depth estimates that capture different degrees of convexity. (Bottom) An image of cardboard under bad lighting conditions that captures the albedo vs. shading ambiguity, which is also represented by our different samples.
We consider the challenging setting of leveraging out-of-domain depth priors to train NeRFs on real-world indoor
scenes with sparse views. Under this setting, the depth priors
we use are trained on a different dataset (e.g., Taskonomy)
from the scenes our NeRFs are trained on (e.g., ScanNet).
This setting is more challenging than usual due to domain
gap between the dataset the prior is trained on and the scenes
NeRF is asked to reconstruct. We demonstrate that our
method outperforms vanilla NeRF and NeRFs with supervi-
sion from depth-based priors in novel view synthesis.
In summary, our key contributions include:
• A method that allows the creation of NeRFs in uncon-
strained indoor settings from only a modest number of
2D views by introducing a sophisticated way to exploit
ambiguity-aware monocular depth estimation.
• A novel way to sample distributions over image depth
estimates based on conditional Implicit Maximum Like-
lihood Estimation that can represent depth ambiguities
and capture a variable number of discrete depth modes.
• A new space-carving loss that can be used in the NeRF
formulation to optimize for a mode-seeking 3D den-
sity that helps select consistent depth modes across the
views and thus compensate for the under-constrained
photometric information in the few view regime.
|
Tan_Temporal_Attention_Unit_Towards_Efficient_Spatiotemporal_Predictive_Learning_CVPR_2023 | Abstract
Spatiotemporal predictive learning aims to generate fu-
ture frames by learning from historical frames. In this pa-
per, we investigate existing methods and present a general
framework of spatiotemporal predictive learning, in which
the spatial encoder and decoder capture intra-frame fea-
tures and the middle temporal module catches inter-frame
correlations. While the mainstream methods employ recur-
rent units to capture long-term temporal dependencies, they
suffer from low computational efficiency due to their unpar-
allelizable architectures. To parallelize the temporal mod-
ule, we propose the Temporal Attention Unit (TAU), which
decomposes temporal attention into intra-frame statical at-
tention and inter-frame dynamical attention. Moreover,
while the mean squared error loss focuses on intra-frame
errors, we introduce a novel differential divergence regu-
larization to take inter-frame variations into account. Ex-
tensive experiments demonstrate that the proposed method
enables the derived model to achieve competitive perfor-
mance on various spatiotemporal prediction benchmarks.
| 1. Introduction
The last decade has witnessed revolutionary advances in
deep learning across various supervised learning tasks such
as image classification [34,52,87], object detection [73,75],
computational biology [21, 22, 43, 83, 84], and etc. Despite
significant breakthroughs in supervised learning, which re-
lies on large-scale labeled datasets, the potential of unsuper-
vised learning remains largely untapped. Self-supervised
learning that designs pretext tasks to produce labels derived
from the data itself is recognized as a subset of unsupervised
learning. In the context of self-supervised learning, con-
trastive self-supervised learning [7,8,28,33,85,86,92,104]
predicts the noise contrastive estimation from predefined
positive or negative pairs, and masked self-supervised learn-
ing[17, 32, 45, 51, 57, 102] predicts the masked patches
from the visible patches. Unlike these image-level self-supervised learning methods, spatiotemporal predictive learning predicts future frames from past frames at the video level [6, 19, 27, 46, 60, 63].
Figure 1. Comparison of model architectures among common spatiotemporal predictive learning methods. Note that we denote 2D convolutional neural networks as Conv, while 3D Conv means 3D convolutional neural networks.
Accurate spatiotemporal predictive learning can bene-
fit broad practical applications in climate change [74, 77],
human motion forecasting [91, 107], traffic flow predic-
tion [18, 97], and representation learning [39, 71]. The
significance of spatiotemporal predictive learning primarily
lies in its potential of exploring both spatial correlation and
temporal evolution in the physical world. Moreover, the
self-supervised nature of spatiotemporal predictive learn-
ing aligns well with human learning styles without a large
amount of labeled data. Massive videos can provide a rich
source of visual information, enabling spatiotemporal pre-
dictive learning to serve as a generative pre-training strat-
egy [35, 60] for feature representation learning towards di-
verse downstream visual supervised tasks.
Most of the existing methods [2,3,9,11,12,19,24,29,41,
44,65,78,79,82,88,89,93–96,98,100,103] in spatiotempo-
ral predictive learning employ hybrid architectures of con-
volutional neural networks and recurrent units in which spa-
tial correlation and time evolution can be learned, respec-
tively. Inspired by the success of long short-term mem-
ory (LSTM) [36] in sequential modeling, ConvLSTM [78]
is a seminal work on the topic of spatiotemporal predic-
tive learning that extends fully connected LSTM to con-
volutional LSTM towards accurate precipitation nowcast-
ing. PredRNN [95] is an admirable work that proposes
Spatiotemporal LSTM (ST-LSTM) units to model spatial
appearances and temporal variations in a unified memory
pool. This work provides insights on designing typical re-
current units for spatiotemporal predictive learning and in-
spires a series of subsequent works [4, 93, 96, 98]. E3D-
LSTM [94] integrates 3D convolutional neural networks
into recurrent units towards good representations with both
short-term frame dependencies and long-term high-level re-
lations. PhyDNet [29] introduces a two-branch architec-
ture that involves physical-based PhyCells and ConvLSTMs
for performing partial differential equation constraints in
latent space. CrevNet [103] proposes an invertible two-
way autoencoder based on flow [14, 15] and a condition-
ally reversible architecture for spatiotemporal predictive
learning. As shown in Figure 1 (a), we present a gen-
eral framework consisting of the spatial encoder/decoder
and the middle temporal module, abstracted from these
methods. Though these spatiotemporal predictive learning
methods have different temporal modules and spatial en-
coders/decoders, they basically share a similar framework.
Based on the general framework, we argue that the tem-
poral module plays an essential role in spatiotemporal pre-
dictive learning. While the common choice of the tempo-
ral module is the recurrent-based units, we explore a novel
parallelizable attention module named Temporal Attention
Unit (TAU) to capture time evolution. The proposed tempo-
ral attention is decomposed into intra-frame statical atten-
tion and inter-frame dynamical attention. Furthermore, we
argue that the mean square error loss only focuses on intra-
frame differences, and we propose a differential divergence
regularization that also cares about the inter-frame varia-
tions. Keeping the spatial encoder and decoder as simple as
2D convolutional neural networks, we deliberately imple-
ment our proposed TAU modules and surprisingly find the
derived model achieves performance competitive with those recurrent-based models. This observation provides a new perspective for improving spatiotemporal predictive learning with parallelizable attention networks instead of commonly used recurrent units.
We conduct experiments on various datasets with dif-
ferent experimental settings: (1) Standard spatiotemporal
predictive learning. Quantitative results on various datasets
demonstrate our proposed method can achieve competitive
performance on standard spatiotemporal predictive learn-
ing. (2) Generalization ability. To verify the generaliza-
tion ability, we train our model on KITTI and test it on the
Caltech Pedestrian dataset with different domains. (3) Pre-
dict future frames with flexible lengths. We tackle long-length prediction by feeding the predicted frames back as input and find that the performance remains consistently good. Through the
superior performance in the above three experimental set-
tings, we demonstrate that our proposed model can provide
a novel manner that learns temporal dependencies without
recurrent units.
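As a purely illustrative sketch of the kind of decomposition described above, the block below pairs an intra-frame attention branch with an inter-frame channel gate, plus a simple inter-frame regularizer in the spirit of the differential divergence; the concrete operators in TAU may differ from these choices.

```python
import torch
import torch.nn as nn

class TemporalAttentionSketch(nn.Module):
    """Hedged sketch of a TAU-like block.

    The static branch attends within a frame via depthwise convolution,
    while the dynamic branch produces per-channel weights from global
    statistics (a squeeze-and-excitation-style gate)."""

    def __init__(self, channels, kernel_size=7):
        super().__init__()
        self.static_attn = nn.Conv2d(channels, channels, kernel_size,
                                     padding=kernel_size // 2, groups=channels)
        self.dynamic_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                  # global context per channel
            nn.Conv2d(channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, H, W) features with time folded into channels
        static = self.static_attn(x)                  # intra-frame attention
        dynamic = self.dynamic_attn(x)                # inter-frame channel weights
        return static * dynamic                       # combined attention applied


def differential_divergence(pred, target, eps=1e-6):
    """One possible inter-frame regularizer: compare distributions of
    frame-to-frame differences of prediction and target with a KL term."""
    dp = (pred[:, 1:] - pred[:, :-1]).flatten(1).softmax(dim=-1)
    dt = (target[:, 1:] - target[:, :-1]).flatten(1).softmax(dim=-1)
    return (dt * ((dt + eps).log() - (dp + eps).log())).sum(dim=-1).mean()
```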
|
Tejero_Full_or_Weak_Annotations_An_Adaptive_Strategy_for_Budget-Constrained_Annotation_CVPR_2023 | Abstract
Annotating new datasets for machine learning tasks is
tedious, time-consuming, and costly. For segmentation ap-
plications, the burden is particularly high as manual delin-
eations of relevant image content are often extremely expen-
sive or can only be done by experts with domain-specific
knowledge. Thanks to developments in transfer learning
and training with weak supervision, segmentation models
can now also greatly benefit from annotations of different
kinds. However, for any new domain application looking
to use weak supervision, the dataset builder still needs to
define a strategy to distribute full segmentation and other
weak annotations. Doing so is challenging, however, as it
is a priori unknown how to distribute an annotation budget
for a given new dataset. To this end, we propose a novel ap-
proach to determine annotation strategies for segmentation datasets, by estimating what proportion of segmentation and classification annotations should be collected given a fixed budget. To do so, our method sequentially
determines proportions of segmentation and classification
annotations to collect for budget-fractions by modeling the
expected improvement of the final segmentation model. We
show in our experiments that our approach yields annota-
tions that perform very close to the optimal for a number of
different annotation budgets and datasets.
| 1. Introduction
Semantic segmentation is a fundamental computer vi-
sion task with applications in numerous domains such as
autonomous driving [11, 43], scene understanding [45],
surveillance [50] and medical diagnosis [9, 18]. As the ad-
vent of deep learning has significantly advanced the state-
of-the-art, many new application areas have come to lightand continue to do so too. This growth has brought and
continues to bring exciting domain-specific datasets for seg-
mentation tasks [6, 19, 29, 32, 52].
Today, the process of establishing machine learning-
based segmentation models for any new application is rel-
atively well understood and standard. Only once an image
dataset is gathered and curated, can machine learning mod-
els be trained and validated. In contrast, building appropri-
ate datasets is known to be difficult, time-consuming, and
yet paramount. Beyond the fact that collecting images can
be tedious, a far more challenging task is producing ground-
truth segmentation annotations to subsequently train (semi)
supervised machine learning models. This is mainly be-
cause producing segmentation annotations often remains a
manual task. As reported in [4], generating segmentation
annotations for a single PASCAL image [15] takes over
200 seconds on average. This implies over 250 hours of
annotation time for a dataset containing a modest 5’000
images. What often further exacerbates the problem for
domain-specific datasets is that only the dataset designer,
or a small group of individuals, have enough expertise to
produce the annotations ( e.g., doctors, experts, etc.), mak-
ing crowd-sourcing ill-suited.
To overcome this challenge, different paradigms have
been suggested over the years. Approaches such as Active
Learning [7, 8, 26] aim to iteratively identify subsets of im-
ages to annotate so as to yield highly performing models.
Transfer learning has also proved to be an important tool
in reducing annotation tasks [13, 17, 24, 25, 30, 36]. For in-
stance, [37] show that training segmentation models from
scratch is often inferior to using pre-training models de-
rived from large image classification datasets, even when
the target application domain differs from the source do-
main. Finally, weakly-supervised methods [2, 40] combine
pixel-wise annotations with other weak annotations that are
faster to acquire, thereby reducing the annotation burden.
Figure 1. Illustration of different semantic segmentation applications; OCT: pathologies of the eye in OCT images, SUIM: underwater scene segmentation [19], Cityscapes: street-level scene segmentation [11], PASCAL VOC: natural object segmentation.
In particular, Papandreou et al. [40] showed that combina-
tions of strong and weak annotations ( e.g., bounding boxes,
keypoints, or image-level tags) delivered competitive results
with a reduced annotation effort. In this work, we rely on
these observations and focus on the weakly supervised seg-
mentation setting.
In the frame of designing annotation campaigns, weakly-
supervised approaches present opportunities for efficiency
as well. Instead of completely spending a budget on a few
expensive annotations, weakly-supervised methods allow a
proportion of the budget to be allocated to inexpensive, or
weak, labels. That is, one could spend the entire annotation
budget to manually segment available images, but would
ultimately lead to relatively few annotations. Conversely,
weak annotations such as image-level labels are roughly
100 times cheaper to gather than their segmentation coun-
terparts [4]. Thus, a greater number of weakly-annotated
images could be used to train segmentation models at an
equal cost. In fact, under a fixed budget, allocating a pro-
portion of the budget to inexpensive image-level class labels
has been shown to yield superior performance compared to
entirely allocating a budget to segmentation labels [4].
Yet, allocating how an annotation budget should be dis-
tributed among strong and weak annotations is challenging,
and inappropriate allocations may severely impact the qual-
ity of the final segmentation model. For example, spend-
ing the entire budget on image-level annotations will clearly
hurt the performance of a subsequent segmentation model.
Instead, a naive solution would be to segment and classify
a fixed proportion of each ( e.g., say 80% - 20%). Knowing
what proportion to use for a given dataset is unclear, how-
ever. Beyond this, there is no reason why the same fixed
proportion would be appropriate across different datasets or
application domains. That is, it would be highly unlikely that the datasets shown in Fig. 1 all require the same proportion of strong and weak annotations to yield optimal seg-
mentation models.
Despite its importance, choosing the best proportion
of annotation types remains a largely unexplored research
question. Weakly-supervised and transfer-learning meth-
ods generally assume that the annotation campaign and the
model training are independent and that all annotations are
simply available at training time. While active learning
methods do alternate between annotation and training, they
focus on choosing optimal samples to annotate rather than
choosing the right type of annotations. Moreover, most ac-
tive learning methods ignore constraints imposed by an an-
notation budget. More notable, however, is the recent work
of Mahmood et al. [33, 34], which aims to determine what weak and strong annotation strategy is necessary to achieve a target performance level. While noteworthy, this objective differs from ours, which asks: given a fixed budget, what strategy is best suited for a given new dataset?
To this end, we propose a novel method to find an opti-
mal budget allocation strategy in an online manner. Using a
collection of unlabeled images and a maximum budget, our
approach selects strong and weak annotations, constrained
by a given budget, that maximize the performance of the subsequently trained segmentation model. To do this, our
method iteratively alternates between partial budget alloca-
tions, label acquisition, and model training. At each step,
we use the annotations performed so far to train multiple
models to estimate how different proportions of weak and
strong annotations affect model performance. A Gaussian
Process models these results and maps the number of weak
and strong annotations to the expected model improvement.
Computing the Pareto optima between expected improve-
ment and costs, we choose a new sub-budget installment
and its associated allocation so as to yield the maximum ex-
pected improvement. We show in our experiments that our
approach is beneficial for a broad range of datasets, and il-
lustrate that our dynamic strategy allows for high perfor-
mances, close to optimal fixed strategies that cannot be de-
termined beforehand.
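For intuition, the following sketch shows one allocation step in which a Gaussian Process maps annotation counts to expected improvement and an affordable candidate installment is chosen; the acquisition rule used here (gain per unit cost) is a simplification of the Pareto-based choice described above, and all names are placeholders.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def propose_allocation(observed_allocs, observed_gains,
                       candidate_allocs, candidate_costs, remaining_budget):
    """One allocation step of an adaptive annotation strategy.

    observed_allocs: (n, 2) array of (num_weak, num_strong) annotation
    counts tried so far; observed_gains: (n,) measured improvements of
    the segmentation model. A GP regressor predicts the expected
    improvement of each affordable candidate installment, and the
    candidate with the best predicted gain per unit cost is returned."""
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(observed_allocs, observed_gains)
    mean = gp.predict(candidate_allocs)

    affordable = np.where(candidate_costs <= remaining_budget)[0]
    best = max(affordable, key=lambda i: mean[i] / max(candidate_costs[i], 1e-9))
    return candidate_allocs[best], candidate_costs[best]
```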
|
Tong_Seeing_Through_the_Glass_Neural_3D_Reconstruction_of_Object_Inside_CVPR_2023 | Abstract
In this paper, we define a new problem of recovering
the 3D geometry of an object confined in a transparent en-
closure. We also propose a novel method for solving this
challenging problem. Transparent enclosures pose chal-
lenges of multiple light reflections and refractions at the
interface between different propagation media e.g. air or
glass. These multiple reflections and refractions cause seri-
ous image distortions which invalidate the single viewpoint
assumption. Hence the 3D geometry of such objects can-
not be reliably reconstructed using existing methods, such
as traditional structure from motion or modern neural re-
construction methods. We solve this problem by explicitly
modeling the scene as two distinct sub-spaces, inside and
outside the transparent enclosure. We use an existing neu-
ral reconstruction method (NeuS) that implicitly represents
the geometry and appearance of the inner subspace. In or-
der to account for complex light interactions, we develop a
hybrid rendering strategy that combines volume rendering
with ray tracing. We then recover the underlying geome-
try and appearance of the model by minimizing the differ-
ence between the real and rendered images. We evaluate
our method on both synthetic and real data. Experiment
results show that our method outperforms the state-of-the-
art (SOTA) methods. Codes and data will be available at
https://github.com/hirotong/ReNeuS
| 1. Introduction
Digitization of museum collections is an emerging topic
of increasing interest, especially with the advancement of
virtual reality and Metaverse technology. Many famous mu-
seums around the world have been building up their own
digital collections1,2 for online exhibitions.
1https://collections.louvre.fr/en/
2https://www.metmuseum.org/
Figure 1. We propose a new research problem of recovering the shape of an object inside a transparent container, as shown in (a), and solve it with a physically-based neural reconstruction solution. Other methods, (b) COLMAP [26] and (c) NeuS [31], which do not consider the optical effects, fail to reconstruct the object, while (d) our proposed ReNeuS retrieves a high-quality mesh of the insect.
Among these collections, there is a special but important kind of collection, such as insects [7, 23], human tissues [30], aquatic
creatures [6] and other fragile specimens that need to be
preserved inside some hard but transparent materials ( e.g.
resin, glass) as shown in Fig. 1a. In order to digitize such
collections, we abstract a distinct research problem which is
seeing through the glassy outside and recovering the geom-
etry of the object inside. To be specific, the task is to recon-
struct the 3D geometry of an object of interest from multiple
2D views of the scene. The object of interest is usually con-
tained in a transparent enclosure, typically a cuboid or some
regular-shaped container.
A major challenge of this task is the severe image dis-
tortion caused by repeated light reflection and refraction at
the interface of air and transparent blocks. Since it inval-
idates the single viewpoint assumption [9], most existing
multi-view 3D reconstruction methods tend to fail.
One of the most relevant research problems is 3D re-
construction through refractive interfaces, known as Refrac-
tive Structure-from-Motion (RSfM) [1, 4]. In RSfM, one
or more parallel interfaces between different propagation
media, with different refractive indices, separate the opti-
cal system from the scene to be captured. Light passing
through the optical system refracts only once at each inter-
face causing image distortion. Unlike RSfM methods, we
deal with intersecting interfaces where multiple reflections
and refractions take place. Another related research topic
is reconstruction of transparent objects [13, 17, 19]. In this
problem, lights only get refracted twice while entering and
exiting the target. In addition to the multiple refractions
considered by these methods, our method also tackles the
problem of multiple reflections within the transparent en-
closure.
Recently, neural implicit representation methods [21,22,
28, 29, 31] achieve state-of-the-art performance in the task
of novel view synthesis and 3D reconstruction showing
promising ability to encode the appearance and geometry.
However, these methods do not consider reflections. Al-
though NeRFReN [12] attempts to incorporate the reflec-
tion effect, it cannot handle more complicated scenarios with multiple refractions and reflections.
In order to handle such complicated cases and make the
problem tractable, we make the following two reasonable
simplifications:
i.Known geometry and pose of the transparent block.
It is not the focus of this paper to estimate the pose of
the interface plane since it is either known [1,4] or cal-
culated as a separate research problem [14] as is typi-
cally assumed in RSfM.
ii.Homogeneous background and ambient lighting.
Since the appearance of transparent objects is highly
affected by the background pattern, several transparent
object reconstruction methods [13, 19] obtain ray cor-
respondence with a coded background pattern. To con-
centrate on recovering the shape of the target object, we
further assume monochromatic background with only
homogeneous ambient lighting conditions similar to a
photography studio.
To handle both Reflection and Refraction in 3D recon-
struction, we propose ReNeuS , a novel extension of a neu-
ral implicit representation method NeuS [31]. In lieu of dealing with the whole scene together, a divide-and-conquer
strategy is employed by explicitly segmenting the scene into
internal space, which is the transparent container containing
the object, and an external space which is the surrounding
air space. For the internal space, we use NeuS [31] to en-
code only the geometric and appearance information. For
the external space, we assume a homogeneous background
with fixed ambient lighting as described in simplification ii.
In particular, we use a novel hybrid rendering strategy that
combines ray tracing with volume rendering techniques to
process the light interactions across the two sub-spaces. The
main contributions of this paper are as follows:
• We introduce a new research problem of 3D recon-
struction of an object confined in a transparent enclo-
sure, a rectangular box in this paper.
• We develop a new 3D reconstruction method, called
ReNeuS, that can handle multiple light reflections and
refractions at intersecting interfaces. ReNeuS employs
a novel hybrid rendering strategy to combine the light
intensities from the segmented inner and outer sub-
spaces.
• We evaluate our method on both synthetic as well as
a newly captured dataset that contains 10 real insect
specimens enclosed in a transparent glass container.
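To illustrate the hybrid rendering idea at the level of a single ray, the sketch below refracts a camera ray into the known glass box via Snell's law and hands the bent ray to an inner volume renderer; `box.intersect` and `inner_renderer` are placeholders, and the reflected branches and exit refraction handled by the full method are omitted here.

```python
import numpy as np

def refract(direction, normal, n_out=1.0, n_in=1.5):
    """Snell's law refraction of a unit direction at an interface with
    outward normal; returns None on total internal reflection."""
    eta = n_out / n_in
    cos_i = -np.dot(normal, direction)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * direction + (eta * cos_i - cos_t) * normal

def render_ray(origin, direction, box, inner_renderer, bg_color):
    """Simplified hybrid renderer for a single camera ray.

    The ray is traced in the outer (air) space until it hits the known
    glass box; it is refracted into the inner space, where a NeuS-style
    volume renderer integrates color along the bent ray."""
    hit = box.intersect(origin, direction)         # placeholder interface query
    if hit is None:
        return bg_color                             # ray never enters the block
    refracted = refract(direction, hit.normal)
    if refracted is None:
        return bg_color                             # total internal reflection skipped
    return inner_renderer(hit.point, refracted)     # volume rendering inside the block
```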
|
Sun_Stimulus_Verification_Is_a_Universal_and_Effective_Sampler_in_Multi-Modal_CVPR_2023 | Abstract
To comprehensively cover the uncertainty of the future,
the common practice of multi-modal human trajectory pre-
diction is to first generate a set/distribution of candidate fu-
ture trajectories and then sample required numbers of tra-
jectories from them as final predictions. Even though a large number of previous studies develop various strong
models to predict candidate trajectories, how to effectively
sample the final ones has not received much attention yet.
In this paper, we propose stimulus verification, serving as
a universal and effective sampling process to improve the
multi-modal prediction capability, where stimulus refers to
the factor in the observation that may affect the future
movements such as social interaction and scene context.
Stimulus verification introduces a probabilistic model, de-
noted as stimulus verifier, to verify the coherence between a
predicted future trajectory and its corresponding stimulus.
By highlighting prediction samples with better stimulus-
coherence, stimulus verification ensures sampled trajecto-
ries plausible from the stimulus’ point of view and there-
fore aids in better multi-modal prediction performance. We
implement stimulus verification on five representative pre-
diction frameworks and conduct exhaustive experiments on
three widely-used benchmarks. Superior results demon-
strate the effectiveness of our approach.
| 1. Introduction
Human trajectory prediction [3, 8, 9, 17, 35, 38, 39, 43]
is a vital task towards autonomous driving systems and
social robots, bridging the perception and decision mod-
ules [4, 26, 27, 42]. Due to the fact that there is no single
correct future, one of the most important aspects of trajec-
tory prediction lies in multi-modal prediction, studying to
Figure 1. (a) Multi-modal trajectory prediction pipeline, including two major components: base prediction model M and sampler Θ. (b) Examples of stimulus(context)-inconsistent trajectories sampled from a stimulus(context)-aware base prediction model, Sophie [33]. Predictions in red lead to impossible regions.
cover multiple distinct future possibilities with finite pre-
dictions. Specifically, for an observation, a set/distribution
of candidate trajectories is first generated by a base predic-
tion model, after which final ones of required number are
sampled from the candidates, as shown in Fig. 1-a.
To guarantee the performance of multi-modal trajectory
prediction, countless efforts have been put into the design
of prediction models in prior works using either gener-
ative or classification frameworks. For generative ones,
some popular architectures such as GAN [7], VAE [14]
and Normalizing Flow [29] have been applied and im-
proved in plenty of studies on multi-modal trajectory pre-
diction [8, 23, 33, 34, 40]. In the meantime, impressive re-
sults are also achieved with classification models such as
MultiPath [3] and PCCSNet [39]. Yet it is often neglected
that an effective sampling strategy, which selects the ap-
propriate trajectories from the collection of candidates, also
has a great impact on the prediction performance. Typically,
random or Monte-Carlo sampling is adopted as the default
one. Only a handful of works [2, 22, 46] have studied the
sampling strategy, while they all seek to solve this prob-
lem by searching better latent vectors for generative frame-
works. Although they have achieved remarkable outcomes,
the applications of such approaches are strongly limited as
they are confined to generative frameworks only, ignoring
the other mainstream frameworks, i.e., classification ones.
Besides, there are some factors in the observation that
may influence the future movements. These factors, such
as social interaction and scene context, are referred to as
stimuli [32], and their significance in guiding accurate predic-
tions has been proven in numerous studies [15, 25, 33,
34]. Although base frameworks can learn from stimu-
lus information to generate candidate predictions, if such
information is missing in the sampling process, out-of-
distribution predictions are still highly probable to be sam-
pled and harm the overall accuracy, as illustrated in Fig. 1-b.
To devise a universal sampler that takes stimulus into con-
sideration, in this paper, we propose an explicit sampling
process called stimulus verification. It first verifies the co-
herence between a candidate trajectory and its correspond-
ing stimulus, and then samples highly stimulus-coherent
ones as final results. Since the verification is an external
process beyond the base prediction model, it can be uni-
versally applied to any multi-modal prediction framework.
Meanwhile, stimulus information is successfully intro-
duced, ensuring that the selected samples comply with the
constraints of the involved stimuli.
Stimulus verification is built on a conditionally parame-
terized probabilistic model optimized by Maximum Likeli-
hood Estimation, namely stimulus verifier. By feeding the
candidate trajectory along with its stimulus into the veri-
fier, corresponding output likelihood reveals the trajectory-
stimulus coherence, where a higher likelihood indicates bet-
ter coherence. To this end, we can map the likelihood to a
coherence score, which further serves as the basis to sample
more stimulus-coherent predictions from a large group of
trajectory candidates.
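As a rough illustration of this sampling step (not the paper's verifier), the snippet below scores candidates with a simple conditional Gaussian standing in for the learned probabilistic model and keeps the most stimulus-coherent ones; the shapes, the Gaussian form, and the top-K rule are assumptions.

```python
# Toy stand-in for the stimulus verifier: score candidates by a Gaussian
# log-likelihood around a stimulus-conditioned mean and keep the top K.
import numpy as np

def coherence_scores(candidates, mu, sigma=1.0):
    """candidates: (N, T, 2) future trajectories; mu: (T, 2) stimulus-conditioned
    mean. Returns one (unnormalized) log-likelihood per candidate."""
    diff = candidates - mu[None]
    return -0.5 * np.sum(diff ** 2, axis=(1, 2)) / sigma ** 2

def stimulus_verification(candidates, mu, k=6):
    scores = coherence_scores(candidates, mu)   # higher = more coherent
    keep = np.argsort(-scores)[:k]
    return candidates[keep]

cands = np.random.randn(100, 12, 2)             # 100 candidates, 12 future steps
mu = np.zeros((12, 2))                          # stand-in for the encoded stimulus
print(stimulus_verification(cands, mu).shape)   # (6, 12, 2)
```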
Our method is simple to implement for any base
prediction framework in a plug-and-play manner. We
implement our approach on five representative frame-
works [8, 29, 33, 39, 43] and conduct exhaustive experi-
ments on three widely-used trajectory prediction bench-
marks, i.e.ETH [28]/UCY [16] Dataset, Grand Central Sta-
tion Dataset [47] and NBA SportVU Dataset [20], to vali-
date our approach. Superior results confirm the effective-
ness of stimulus verification.
|
Song_OPE-SR_Orthogonal_Position_Encoding_for_Designing_a_Parameter-Free_Upsampling_Module_CVPR_2023 | Abstract
Arbitrary-scale image super-resolution (SR) is often
tackled using the implicit neural representation (INR) ap-
proach, which relies on a position encoding scheme to im-
prove its representation ability. In this paper, we introduce
orthogonal position encoding (OPE), an extension of po-
sition encoding, and an OPE-Upscale module to replace
the INR-based upsampling module for arbitrary-scale im-
age super-resolution. Our OPE-Upscale module takes 2D
coordinates and latent code as inputs, just like INR, but
does not require any training parameters. This parameter-
free feature allows the OPE-Upscale module to directly
perform linear combination operations, resulting in con-
tinuous image reconstruction and achieving arbitrary-scale
image reconstruction. As a concise SR framework, our
method is computationally efficient and consumes less mem-
ory than state-of-the-art methods, as confirmed by exten-
sive experiments and evaluations. In addition, our method
achieves comparable results with state-of-the-art methods
in arbitrary-scale image super-resolution. Lastly, we show
that OPE corresponds to a set of orthogonal basis, validat-
ing our design principle.1
| 1. Introduction
Photographs are composed of discrete pixels of vary-
ing precision due to the limitations of sampling frequency,
which breaks the continuous visual world into discrete
parts. The single image super-resolution (SISR) task aims
to restore the original continuous world in the image as
much as possible. In an arbitrary-scale SR task, one of-
ten reconstructs the continuous representation of a low-
resolution image and then adjusts the resolution of the target
image as needed. The recent rise of implicit neural repre-
sentation (INR) in 3D vision has enabled the representation
1Project page: https://github.com/gaochao-s/ope-sr
of complex 3D objects and scenes in a continuous man-
ner [14,19,41,42,44,45,47,49,57,58], which also opens up
possibilities for continuous image and arbitrary-scale image
super-resolution [5, 18, 32, 72].
Existing methods for arbitrary-scale SR typically use a
post-upsampling framework [70]. In this approach, low-
resolution (LR) images first pass through a deep CNN
network (encoder) without improving the resolution, and
then pass through an INR-based upsampling module (de-
coder) with a specified target resolution to reconstruct high-
resolution (HR) images. The decoder establishes a mapping
from feature maps (the output of encoder) to target image
pixels using a pre-assigned grid partitioning and achieves
arbitrary-scale with the density of the grid in Cartesian co-
ordinate system. However, the INR approach is biased toward
learning low-frequency information, a defect known as spectral
bias [50]. To address this issue, sinusoidal positional en-
coding is introduced to embed input coordinates to higher
dimensions and enable the network to learn high-frequency
details. This inspired recent works on arbitrary-scale SR to
further improve the representation ability [32, 72].
Despite its effectiveness in arbitrary-scale SR, the INR-
based upsampling module increases the complexity of the
entire SR framework as two different networks are jointly
trained. Additionally, as a black-box model, it represents
a continuous image with a strong dependency on both the
feature map and the decoder (e.g., MLP). However, its rep-
resentation ability decreases after flipping the feature map,
a phenomenon known as flipping consistency decline. As
shown in Fig. 1, flipping the feature map horizontally be-
fore the upsampling module of LIIF results in a blurred tar-
get image that does not have the expected flip transforma-
tion. This decline could be due to limitations of the MLP in
learning the symmetry feature of the image.
An MLP is a universal function approximator [17] that tries
to fit a mapping function from the feature map to the con-
tinuous image; therefore, it is reasonable to assume that
such a process could be solved by an analytical solution. In
this paper, we re-examine position encoding from the per-
spective of orthogonal basis and propose orthogonal posi-
tion encoding (OPE) for continuous image representation.
The linear combination of a 1D latent code and OPE can
directly reconstruct a continuous image patch without using
an implicit neural function [5]. To justify OPE’s rationale,
we analyze it from the perspectives of functional analysis
and the 2D Fourier transform. We further embed it into a
parameter-free upsampling module, called the OPE-Upscale
Module, to replace the INR-based upsampling module in the
deep SR framework, which can then be greatly simplified.
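To make the idea concrete, the sketch below evaluates a 2D sinusoidal (Fourier-series) basis at continuous coordinates and reconstructs a patch as a plain linear combination with a latent code, with no MLP involved; the basis ordering, normalization, and frequency cutoff are our assumptions and not the exact OPE formulation.

```python
# Sketch of a parameter-free, MLP-free reconstruction with a 2D sinusoidal basis.
import numpy as np

def ope_basis(x, y, max_freq=3):
    """Evaluate cos/sin products up to max_freq at coordinates in [-1, 1]."""
    feats = []
    for kx in range(max_freq + 1):
        for ky in range(max_freq + 1):
            feats.append(np.cos(np.pi * kx * x) * np.cos(np.pi * ky * y))
            feats.append(np.sin(np.pi * kx * x) * np.sin(np.pi * ky * y))
    return np.stack(feats, axis=-1)              # (..., num_basis)

latent = np.random.randn(2 * (3 + 1) ** 2)       # one latent code per image patch
h = w = 48                                       # any target resolution (arbitrary scale)
ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
patch = ope_basis(xs, ys) @ latent               # linear combination, no training
print(patch.shape)                               # (48, 48)
```

Rendering at a different scale only changes how densely the coordinates are sampled, which is what makes the module resolution-free.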
Unlike the state-of-the-art method by Lee et al. [32],
which enhances MLP with position encoding, we explore
the possibility of extending position encoding without MLP.
By providing a more concise SR framework, our method
achieves high computing efficiency and consumes less
memory than the state-of-the-art, while also achieving com-
parable image performance in arbitrary-scale SR tasks.
Our contributions are as follows:
• We propose a novel position encoding, called orthogo-
nal position encoding (OPE), which takes the form of
a 2D-Fourier series and corresponds to a set of orthog-
onal basis. Building on OPE, we introduce the OPE-
Upscale Module, a parameter-free upsampling module
for arbitrary-scale image super-resolution.
• Our method significantly reduces the consumption of
computing resources, resulting in high computing effi-
ciency for arbitrary-scale SR tasks.
• The OPE-Upscale Module is interpretable, parameter-
free and does not require training, resulting in a con-
cise SR framework that elegantly solves the flipping
consistency problem.
• Extensive experiments demonstrate that our method
achieves comparable results with the state-of-the-art.
Furthermore, our method enables super-resolution up
to a large scale of ×30.
|
Sosea_MarginMatch_Improving_Semi-Supervised_Learning_with_Pseudo-Margins_CVPR_2023 | Abstract
We introduce MarginMatch, a new SSL approach combin-
ing consistency regularization and pseudo-labeling, with its
main novelty arising from the use of unlabeled data train-
ing dynamics to measure pseudo-label quality. Instead of
using only the model’s confidence on an unlabeled example
at an arbitrary iteration to decide if the example should be
masked or not, MarginMatch also analyzes the behavior of
the model on the pseudo-labeled examples as the training
progresses, to ensure low quality predictions are masked out.
MarginMatch brings substantial improvements on four vi-
sion benchmarks in low data regimes and on two large-scale
datasets, emphasizing the importance of enforcing high-
quality pseudo-labels. Notably, we obtain an improvement in
error rate over the state-of-the-art of 3.25% on CIFAR-100
with only 25 labels per class and of 3.78% on STL-10 using
as few as 4 labels per class. We make our code available at
https://github.com/tsosea2/MarginMatch .
| 1. Introduction
Deep learning models have seen tremendous success in
many vision tasks [14, 22, 27, 42, 43]. This success can be
attributed to their scalability, being able to produce better re-
sults when they are trained on large datasets in a supervised
fashion [15, 27, 34, 35, 43, 47]. Unfortunately, large labeled
datasets annotated for various tasks and domains are diffi-
cult to acquire and demand considerable annotation effort
or domain expertise. Semi-supervised learning (SSL) is a
powerful approach that mitigates the requirement for large
labeled datasets by effectively making use of information
from unlabeled data, and thus, has been studied extensively
in vision [4, 5, 23, 25, 30, 36, 38, 39, 44–46].
Recent SSL approaches integrate two important com-
ponents: consistency regularization [46, 49] and pseudo-
labeling [25]. Consistency regularization works on the as-
sumption that a model should output similar predictions
when fed perturbed versions of the same image, whereas
pseudo-labeling uses the model’s predictions of unlabeled
examples as labels to train against. For example, Sohn et
al. [41] introduced FixMatch that combines consistency regularization on weak and strong augmentations with pseudo-
labeling. FixMatch relies heavily on a high-confidence
threshold to compute the unsupervised loss, disregarding any
pseudo-labels whose confidence falls below this threshold.
While training using only high-confidence pseudo-labels has
shown to consistently reduce the confirmation bias [1], this
rigid threshold allows access only to a small amount of un-
labeled data for training, and thus, ignores a considerable
amount of unlabeled examples for which the model’s predic-
tions do not exceed the confidence threshold. More recently,
Zhang et al. [49] introduced FlexMatch that relaxes the rigid
confidence threshold in FixMatch to account for the model’s
learning status of each class in that it adaptively scales down
the threshold for a class to encourage the model to learn
from more examples from that class. The flexible thresh-
olds in FlexMatch allow the model to have access to a much
larger and diverse set of unlabeled data to learn from, but
lowering the thresholds can lead to the introduction of wrong
pseudo-labels, which are extremely harmful for generaliza-
tion. Interestingly, even the high-confidence threshold used
in FixMatch can result in wrong pseudo-labels. See
Figure 1 for incorrect pseudo-labels detected in the training
set after we apply FixMatch and FlexMatch on ImageNet.
We posit that a drawback of FixMatch and FlexMatch and
in general of any pseudo-labeling approach is that they use
the confidence of the model only at the current iteration
to enforce the quality of pseudo-labels and completely ignore
the model’s predictions at prior iterations.
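The difference between these two signals can be sketched as follows; this is an illustrative stand-in, not MarginMatch's exact rule, and the margin definition (assigned-class probability minus the largest competing probability, averaged over iterations) and thresholds are assumptions.

```python
# Illustrative comparison: current-confidence masking vs. averaged pseudo-margins.
import numpy as np

def confidence_mask(probs, tau=0.95):
    """probs: (N, C) current-iteration predictions on unlabeled data."""
    return probs.max(axis=1) >= tau

def averaged_margin_mask(prob_history, pseudo_labels, gamma=0.0):
    """prob_history: (T, N, C) predictions over T iterations; keep examples whose
    margin w.r.t. their pseudo-label stays high on average across iterations."""
    T, N, _ = prob_history.shape
    assigned = prob_history[:, np.arange(N), pseudo_labels]        # (T, N)
    others = prob_history.copy()
    others[:, np.arange(N), pseudo_labels] = -np.inf
    margins = assigned - others.max(axis=2)                        # (T, N)
    return margins.mean(axis=0) >= gamma

T, N, C = 5, 8, 3
hist = np.random.dirichlet(np.ones(C), size=(T, N))                # (T, N, C)
labels = hist[-1].argmax(axis=1)
print(confidence_mask(hist[-1]))
print(averaged_margin_mask(hist, labels))
```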
In this paper, we propose MarginMatch, a new SSL ap-
proach that monitors the behavior of the model on the unla-
beled examples as the training progresses, from the begin-
ning of tr |
Tao_GALIP_Generative_Adversarial_CLIPs_for_Text-to-Image_Synthesis_CVPR_2023 | Abstract
Synthesizing high-fidelity complex images from text is
challenging. Based on large pretraining, the autoregres-
sive and diffusion models can synthesize photo-realistic im-
ages. Although these large models have shown notable
progress, there remain three flaws. 1) These models re-
quire tremendous training data and parameters to achieve
good performance. 2) The multi-step generation design
slows the image synthesis process heavily. 3) The synthe-
sized visual features are challenging to control and require
delicately designed prompts. To enable high-quality, effi-
cient, fast, and controllable text-to-image synthesis, we pro-
pose Generative Adversarial CLIPs, namely GALIP . GALIP
leverages the powerful pretrained CLIP model both in the
discriminator and generator. Specifically, we propose a
CLIP-based discriminator. The complex scene understand-
ing ability of CLIP enables the discriminator to accurately
assess the image quality. Furthermore, we propose a CLIP-
empowered generator that induces the visual concepts from
CLIP through bridge features and prompts. The CLIP-
integrated generator and discriminator boost training ef-
ficiency, and as a result, our model only requires about 3%
training data and 6% learnable parameters, achieving com-
parable results to large pretrained autoregressive and diffu-
sion models. Moreover, our model achieves ∼120× faster
synthesis speed and inherits the smooth latent space from
GAN. The extensive experimental results demonstrate the
excellent performance of our GALIP . Code is available at
https://github.com/tobran/GALIP .
| 1. Introduction
Over the last few years, we have witnessed the great suc-
cess of generative models for various applications [4, 47].
Among them, text-to-image synthesis [3, 5, 16, 19–22, 26,
29, 30, 34, 43, 48, 50–53, 60] is one of the most appealing
Figure 1. (a) Existing text-to-image GANs conduct adversarial
training from scratch. (b) Our proposed GALIP conducts adver-
sarial training based on the integrated CLIP model.
applications. It generates high-fidelity images according to
given language guidance. Owing to the convenience of lan-
guage for users, text-to-image synthesis has attracted many
researchers and has become an active research area.
Based on a large scale of data collections, model size,
and pretraining, recently proposed large pretrained autore-
gressive and diffusion models, e.g., DALL-E [34] and
LDM [36], show the impressive generative ability to syn-
thesize complex scenes and outperform the previous text-to-
image GANs significantly. Although these large pretrained
generative models have achieved significant advances, they
still suffer from three flaws. First, these models require
tremendous training data and parameters for pretraining.
The large data and model size brings an extremely high
computing budget and hardware requirements, making it
inaccessible to many researchers and users. Second, the
generation of large models is much slower than GANs.
The token-by-token generation and progressive denoising
require hundreds of inference steps and cause the generated
results to lag seriously behind the language inputs. Third,
there is no intuitive smooth latent space as in GANs, which
associates meaningful visual attributes with the latent vector. The multi-step
generation design breaks the synthesis process and scatters
the meaningful latent space. It makes the synthesis process
require delicately designed prompts to control.
To address the above limitations, we rethink Generative
Adversarial Networks (GAN). GANs are much faster than
autoregressive and diffusion models and have smooth latent
space, which enables more controllable synthesis. How-
ever, GAN models are known for potentially unstable train-
ing and less diversity in generation [7]. This makes current
text-to-image GANs suffer from unsatisfactory synthesis qual-
ity under complex scenes.
In this work, we introduce the pretrained CLIP [31] into
text-to-image GANs. The large pretraining of CLIP brings
two advantages. First, it enhances the complex scene un-
derstanding ability. The pretraining dataset has many com-
plex images under different scenes. Armed with the Vision
Transformer (ViT) [9], the image encoder can extract in-
formative and meaningful visual features from complex im-
ages to align the corresponding text descriptions after ade-
quate pretraining. Second, the large pretraining dataset also
enables excellent domain generalization ability. It contains
various kinds of images, e.g., photos, drawings, cartoons,
and sketches, collected from a variety of publicly available
sources. The various images enable the CLIP model to map
different kinds of images to shared concepts, leading to
impressive domain generalization and zero-shot trans-
fer ability. These two advantages of CLIP, complex scene
understanding and domain generalization ability, motivate
us to build a more powerful text-to-image model.
We propose a novel text-to-image generation framework
named Generative Adversarial CLIPs (GALIP). As shown
in Figure 1, the GALIP integrates the CLIP model [31] in
both the discriminator and generator. To be specific, we pro-
pose the CLIP-based discriminator and CLIP-empowered
generator. The CLIP-based discriminator inherits the com-
plex scene understanding ability of CLIP [31]. It is com-
posed of a frozen ViT-based CLIP image encoder (CLIP-
ViT) and a learnable mate-discriminator (Mate-D). The
Mate-D is mated to the CLIP-ViT for adversarial training.
To retain the knowledge of complex scene understanding
in the CLIP-ViT, we freeze its weights and collect the pre-
dicted CLIP image features from different layers. Then,
the Mate-D further extracts informative visual features from
collected CLIP features to distinguish the synthesized and
real images. Based on the complex scene understanding
ability of CLIP-ViT and the continuous analysis of Mate-
D, the CLIP-based discriminator can assess the quality of
generated complex images more accurately.
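A minimal PyTorch-style sketch of this frozen-backbone-plus-learnable-head pattern is shown below; it is not GALIP's code, and a toy MLP stands in for the real CLIP-ViT so the snippet runs without pretrained weights.

```python
# Sketch: collect features from several layers of a frozen backbone and let a
# small learnable head ("Mate-D"-style) produce a real/fake logit.
import torch
import torch.nn as nn

class FrozenBackbone(nn.Module):
    def __init__(self, dim=64, n_layers=4):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])
        for p in self.parameters():
            p.requires_grad_(False)              # keep the pretrained knowledge intact

    def forward(self, x):
        feats = []
        for blk in self.blocks:
            x = torch.relu(blk(x))
            feats.append(x)                      # collect per-layer features
        return feats

class MateDiscriminator(nn.Module):
    def __init__(self, dim=64, n_layers=4):
        super().__init__()
        self.backbone = FrozenBackbone(dim, n_layers)
        self.head = nn.Sequential(nn.Linear(dim * n_layers, dim), nn.ReLU(),
                                  nn.Linear(dim, 1))   # the only trainable part

    def forward(self, x):
        feats = torch.cat(self.backbone(x), dim=-1)
        return self.head(feats)                  # adversarial real/fake logit

disc = MateDiscriminator()
print(disc(torch.randn(2, 64)).shape)            # torch.Size([2, 1])
```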
Furthermore, we propose the CLIP-empowered genera-
tor, which exerts the domain generalization ability of CLIP
[31]. It is hard for the generator to synthesize complex im-
ages directly. Some works employ sketch [11] and lay-
out [21, 23] as bridge domains to alleviate the difficulty.
However, such a design requires additional labeled data.
Different from these works, the excellent domain general-
ization of CLIP [31] motivates us to hypothesize that there may be an implicit bridge domain, which is easier to synthesize but can
be mapped to the same visual concepts through the CLIP-
ViT. Thus, we design the CLIP-empowered generator. It
is composed of a frozen CLIP-ViT and a learnable mate-
generator (Mate-G). The Mate-G first predicts the implicit
bridge features from text and noise. Then the bridge feature
will be mapped to the visual concepts through CLIP-ViT.
Furthermore, we add some text-conditioned prompts to the
CLIP-ViT for task adaptation. The predicted visual con-
cepts close the gap between text features and target images
which enhances the complex image synthesis ability.
Overall, our contributions can be summarized as follows:
• We propose an efficient, fast, and more controllable
model for text-to-image synthesis that can synthesize
high-quality complex images.
• We propose the CLIP-based discriminator, which as-
sesses the quality of complex images more accurately.
• We propose the CLIP-empowered generator, which syn-
thesizes images based on text features and predicted CLIP
visual features.
• Extensive experiments demonstrate that the proposed
GALIP can achieve comparable performance with large
pretrained models based on significantly smaller compu-
tational costs.
|
Sun_Ultrahigh_Resolution_ImageVideo_Matting_With_Spatio-Temporal_Sparsity_CVPR_2023 | Abstract
Commodity ultrahigh definition (UHD) displays are be-
coming more affordable which demand imaging in ultrahigh
resolution (UHR). This paper proposes SparseMat , a com-
putationally efficient approach for UHR image/video mat-
ting. Note that it is infeasible to directly process UHR im-
ages at full resolution in one shot using existing matting
algorithms without running out of memory on consumer-
level computational platforms, e.g., Nvidia 1080Ti with 11G
memory, while patch-based approaches can introduce un-
sightly artifacts due to patch partitioning. Instead, our
method resorts to spatial and temporal sparsity for ad-
dressing general UHR matting. When processing videos,
huge computation redundancy can be reduced by exploit-
ing spatial and temporal sparsity. In this paper, we show
how to effectively detect spatio-temporal sparsity, which
serves as a gate to activate input pixels for the matting
model. Under the guidance of such sparsity, our method
with sparse high-resolution module (SHM) can avoid patch-
based inference while memory efficient for full-resolution
matte refinement. Extensive experiments demonstrate that
SparseMat can effectively and efficiently generate high-
quality alpha matte for UHR images and videos at the orig-
inal high resolution in a single pass. Project page is in
https://github.com/nowsyn/SparseMat.git.
| 1. Introduction
Ultrahigh resolution (UHR) matting is an important
problem [36, 49], and with increasing demand due to the
fast advent and accessibility of commodity ultrahigh def-
inition displays in real-world applications, such as gam-
ing, TV/movie post-production, and image/video editing,
UHR matting becomes ever relevant. However, modern
consumer-level GPU and mobile devices still have limited
hardware resources. Despite good technical contributions,
guided filters and patch-based techniques (Figure 1) are not
applicable, when unsightly blurry and seams artifacts are
unacceptable in UHR imaging.
Matting is a primary technique for image/video editing
and plays an important role in many applications. The goal
of matting is to extract a detailed alpha matte of the foreground object from a given image/video. The matted fore-
ground can be composited on other background images. As
known, matting is an ill-posed problem defined as Equa-
tion 1 with the given image I, foreground F, background B
and alpha α ∈ [0, 1] to be extracted:
I = αF + (1 − α)B. (1)
Most of the existing state-of-the-art matting meth-
ods [28, 33, 48] take the whole image as input in a forward
pass, and thus the resolution they can handle is bounded
by available memory. Given limited memory, to process
UHR images, a straightforward approach is to first process
the downsampled input image which will inevitably lead to
blurry artifacts. Thus, super-resolution methods, such as
guided filter (GF) [21] or deep guided filter (DGF) [47],
have been proposed to recover missing details. However,
guided filter or deep guided filter easily produces fuzzy ar-
tifacts when processing complex hairy structures as shown
in Figure 1-(a). Patch-based inference [23, 35] is another
plausible strategy. However, small patch can cause artifacts
due to insufficient global context and inconsistent local con-
text as shown in Figure 1-(b). On the other hand, using large
patch (e.g., 2048) with large overlap produces defective al-
pha matte with missing details or blurry artifacts in long hair
region due to the lack of long-range dependency in UHR
images as shown in Figure 1-(b), not to mention that heavy
computation and memory overhead are introduced with in-
creasing patch size, so that some methods cannot even run,
as shown in Figure 1-(c). In conclusion, super resolving
based on (deep) guided filter or patch-based inference are
not ideal choices for handling UHR matting.
In this paper we propose a general image/video matting
framework SparseMat to address the problem of UHR mat-
ting, which is both memory and computation efficient while
generating high-quality alpha mattes. The core idea of our
method is to skip a large amount of redundant computation
on many pixels during processing UHR images or videos.
In general, our method takes the low-resolution prior as
input to generate spatio-temporal sparsity, which serves as
the gate to activate pixels consumed by a sparse convolution
module. Both temporal and spatial information contribute
to the sparsity estimation. Specifically, we compute color
difference between adjacent frames to obtain the temporal
Figure 1. (a) Blurry artifacts when UHR images are not matted in the original full resolution: guided filter (GF), deep guided filter (RVM [36]), small patch replacement method (BGMv2 [35]). The low-resolution alpha matte obtained by the self-trained low-resolution prior network (LPN) is for reference here. Our sparse high-resolution module (SHM) produces high-quality alpha matte for UHR matting. (b) Seam artifacts of patch-based inference from FBA [13] under different patch sizes with a fixed ratio (1/8) of overlap. Full-resolution result is ours. (c) Memory consumption and computation (GMACs) with different patch sizes. When the patch size exceeds 2K, some methods cannot even run on Nvidia 1080Ti with 11G memory.
sparsity. For the spatial sparsity, we can derive it from any
lightweight low-resolution matting model such as [28, 36].
The two sparsity maps are combined to determine the
pixels that need high-resolution processing by
the sparse module. Unlike super-resolving strategy per-
formed on the whole image, our method with sparse high-
resolution module (SHM) can safely skip expensive com-
putations in large solid pixel regions, only paying attention
to irregular, sparse and (oftentimes) thin border regions sur-
rounding the object or transitional regions within the ob-
ject. In contrast to restrictive views of patch-based strategy
to local regions, our method takes a more global perspec-
tive of the foreground object thus avoiding potential arti-
facts due to inadequate context consideration. This design
allows SparseMat to process an ultrahigh resolution image
or video frame in only one shot without suffering any in-
formation loss caused by down-sampling or patch partition,
which thus produces high-quality alpha matte for ultrahigh
resolution images or videos.
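A hypothetical sketch of this gating idea is given below: pixels are activated when the inter-frame color change is large or when they fall in the transition band of a low-resolution alpha prior. The thresholds, the random placeholder data, and the downscaled frame size are assumptions.

```python
# Sketch of spatio-temporal sparsity gating with placeholder data.
import numpy as np

def temporal_sparsity(frame_t, frame_prev, thresh=0.05):
    return np.abs(frame_t - frame_prev).mean(axis=-1) > thresh     # (H, W) bool

def spatial_sparsity(lowres_alpha_up, lo=0.05, hi=0.95):
    return (lowres_alpha_up > lo) & (lowres_alpha_up < hi)         # transition band

H, W = 540, 960                                   # downscaled here just for the demo
frame_t = np.random.rand(H, W, 3)
frame_prev = frame_t + 0.01 * np.random.randn(H, W, 3)
alpha_up = np.random.rand(H, W)                   # stand-in for the upsampled LR prior
gate = temporal_sparsity(frame_t, frame_prev) | spatial_sparsity(alpha_up)
active = np.argwhere(gate)                        # only these pixels reach the SHM
print(active.shape[0] / (H * W))                  # fraction of activated pixels
```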
Our contributions are summarized below:
1. This is the first work for general UHR image/video
matting which enables full-resolution inference in one
shot without running out of memory, thus eliminating
the need of patch partitioning and patch artifacts.
2. We show how to obtain accurate spatio-temporal spar-
sity for general UHR matting, which has never been
adequately discussed in previous works. In addition,
this is the first work that proposes to apply sparse con-
volution network to skip unnecessary computations in
dealing with UHR matting.
3. We conduct extensive experiments in multiple popu-
lar image/video matting datasets, including Adobe Im-
age Matting Dataset [48], VideoMatte240K [42] and
our self-collected UHR matting dataset, and provide
promising qualitative results, which demonstrate the
superiority of our SparseMat in dealing with general
image/video matting.
|
Tian_Multi-Object_Manipulation_via_Object-Centric_Neural_Scattering_Functions_CVPR_2023 | Abstract
Learned visual dynamics models have proven effective
for robotic manipulation tasks. Yet, it remains unclear how
best to represent scenes involving multi-object interactions.
Current methods decompose a scene into discrete objects,
but they struggle with precise modeling and manipulation
amid challenging lighting conditions as they only encode
appearance tied with specific illuminations. In this work,
we propose using object-centric neural scattering functions
(OSFs) as object representations in a model-predictive con-
trol framework. OSFs model per-object light transport, en-
abling compositional scene re-rendering under object re-
arrangement and varying lighting conditions. By combin-
ing this approach with inverse parameter estimation and
graph-based neural dynamics models, we demonstrate im-
proved model-predictive control performance and general-
ization in compositional multi-object environments, even in
previously unseen scenarios and harsh lighting conditions.
| 1. Introduction
Predictive models are the core components of many
robotic systems for solving inverse problems such as plan-
ning and control. Physics-based models built on first prin-
ciples have shown impressive performance in domains such
as drone navigation [3] and robot locomotion [28]. How-
ever, such methods usually rely on complete a priori knowl-
edge of the environment, limiting their use in complicated
manipulation problems where full-state estimation is com-
plex and often impossible. Therefore, a growing number of
approaches alternatively propose to learn dynamics models
directly from raw visual observations [2, 12, 14, 17, 21, 48].
Although using raw sensor measurements as inputs to
predictive models is an attractive paradigm as they are
readily available, visual data can be challenging to work
with directly due to its high dimensionality. Prior meth-
ods proposed to learn dynamics models over latent vec-
Figure 1. While typically studied visual manipulation settings
are carefully controlled environments, we consider scenarios with
varying and even harsh lighting, in addition to novel object con-
figurations, that are more similar to real-world scenarios.
tors, demonstrating promising results in a range of robotics
tasks [17, 18, 44, 52]. However, with multi-object interac-
tions, the underlying physical world is 3D and composi-
tional. Encoding everything into a single latent vector fails
to consider the relational structure within the environment,
limiting its generalization outside the training distribution.
Another promising strategy is to build more structured
visual representations of the environment, including the use
of particles [31, 33, 36], keypoints [32, 37, 38], and object
meshes [22]. Among the structured representations, Driess
et al. [10] leveraged compositional neural implicit represen-
tations in combination with graph neural networks (GNNs)
for the dynamic modeling of multi-object interactions. The
inductive bias introduced by GNNs captures the environ-
ment’s underlying structure, enabling generalization to sce-
narios containing more objects than during training, and the
neural implicit representations allow precise estimation and
modeling of object geometry and interactions. However,
Driess et al. [10] only considered objects of uniform color
in well-lit scenarios. It is unclear how the method works
for objects with more complicated geometries and textures.
The lack of explicit modeling of light transport also limits
its use in scenarios of varying lighting conditions, especially
those vastly different from the training distributions.
In this paper, we propose to combine object-centric
neural scattering functions (OSFs) [57] and graph neural
networks for the dynamics modeling and manipulation of
multi-object scenes. OSFs explicitly model light transport
and learn to approximate the cumulative radiance transfer,
which allows relighting and inverse estimation of scenes
involving multiple objects and the change of lights, such
as those shown in Figure 1. Combined with gradient-free
evolutionary algorithms like covariance matrix adaptation
(CMA), the learned neural implicit scattering functions sup-
port inverse parameter estimation, including object poses
and light directions, from visual observations. Based on the
estimated scene parameters, a graph-based neural dynamics
model considers the interactions between objects and pre-
dicts the evolution of the underlying system. The predictive
model can then be used within a model-predictive control
(MPC) framework for downstream manipulation tasks.
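The planning loop can be illustrated with a generic sampling-based MPC sketch (not the paper's planner); the linear toy dynamics stands in for the learned graph-based model and the quadratic goal cost is an assumption.

```python
# Generic sampling-based MPC: roll candidate action sequences through a
# dynamics model, score them against a goal, execute the best first action.
import numpy as np

def dynamics(state, action):
    return state + 0.1 * action                    # placeholder learned model

def rollout_cost(state, actions, goal):
    for a in actions:
        state = dynamics(state, a)
    return np.sum((state - goal) ** 2)

def mpc_step(state, goal, horizon=10, n_samples=256, rng=np.random.default_rng(0)):
    candidates = rng.normal(size=(n_samples, horizon, state.shape[0]))
    costs = [rollout_cost(state, acts, goal) for acts in candidates]
    return candidates[int(np.argmin(costs))][0]    # execute only the first action

state, goal = np.zeros(3), np.array([0.5, -0.2, 0.1])
for _ in range(20):
    state = dynamics(state, mpc_step(state, goal))
print(state)                                       # moves toward the goal
```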
Experiments demonstrate that our method performs
more accurate reconstruction in harsh lighting conditions
compared to prior methods, producing higher-fidelity long
horizon prediction compared to video prediction models.
When combined with inverse parameter estimation, our en-
tire control pipeline improves on simulated object manipu-
lation tasks in settings with varying lighting and previously
unseen object configurations, compared to performing MPC
directly in image space.
We make three contributions. First, the use of neural
scattering functions supports inverse parameter estimation
in scenarios with challenging and previously unseen light-
ing conditions. Second, our method models the composi-
tionality of the underlying scene and can make long-term
future predictions about the system’s evolution to support
downstream planning tasks. Third, we conduct and show
successful manipulation of simulated multi-object scenes
involving extreme lighting directions.
|
Tang_Label_Information_Bottleneck_for_Label_Enhancement_CVPR_2023 | Abstract
In this work, we focus on the challenging problem of La-
bel Enhancement (LE), which aims to exactly recover label
distributions from logical labels, and present a novel Label
Information Bottleneck (LIB) method for LE. For the recov-
ery process of label distributions, the label irrelevant infor-
mation contained in the dataset may lead to unsatisfactory
recovery performance. To address this limitation, we make
efforts to excavate the essential label relevant information
to improve the recovery performance. Our method formu-
lates the LE problem as the following two joint processes:
1) learning the representation with the essential label rel-
evant information, 2) recovering label distributions based
on the learned representation. The label relevant informa-
tion can be excavated based on the “bottleneck” formed by
the learned representation. Significantly, both the label rel-
evant information about the label assignments and the label
relevant information about the label gaps can be explored in
our method. Evaluation experiments conducted on several
benchmark label distribution learning datasets verify the ef-
fectiveness and competitiveness of LIB. Our source codes
are available at https://github.com/qinghai-
zheng/LIBLE
| 1. Introduction
Learning with label ambiguity is important in computer
vision and machine learning. Different from the traditional
Multi-Label Learning (MLL), which employs multiple log-
ical labels to annotate one instance to address the label am-
biguity issue [20], Label Distribution Learning (LDL) con-
siders the relative importance of different labels and draws
much attention in recent years [6, 8, 14, 18, 26]. By distin-
guishing the description degrees of all labels, LDL anno-
tates one instance with a label distribution. Therefore, LDL
is a more general learning paradigm, MLL can be regarded
as a special case of LDL [8, 10, 12].
Recently, many LDL methods are proposed and achieve
great success in practice [3, 9, 14, 18]. Instances with exact
label distributions are vital for the training process of LDL
methods. Nevertheless, annotating instances with label dis-
tributions is time-consuming [24,28]. We take the label dis-
tribution annotation process of SJAFFE dataset for example
here. SJAFFE dataset is the facial expression dataset, which
contains 213 grayscale images collected from 10 Japanese
female models, each facial expression image is rated by 60
persons on 6 basic emotions, including happiness, surprise,
sadness, fear, anger, and disgust, with a five-level scale from
1 - 5, the higher value indicates the higher emotion intensity.
Consequently, the average score of each emotion serves
as the emotion label distribution [14,28]. Clearly, the above
annotation process is costly and it is unpractical to annotate
data with label distributions manually, especially when the
number of data is large. Fortunately, most existing datasets
in the field of computer vision and machine learning are an-
notated by single-label or multi-labels [7, 29], therefore, a
highly recommended promising solution is Label Enhance-
ment (LE), which attempts to recover the desired label dis-
tributions exactly from existing logical labels [24, 28, 32].
Driven by the urgent requirement of obtaining label dis-
tributions and the convenience of LE, some LE methods are
proposed in recent years [5, 7, 11, 13, 15, 17, 21, 24, 28, 29].
Given a dataset X = {x1, x2, ···, xn} ∈ R^{q×n}, in which
q and n denote the number of dimensions and the number
of instances, the potential label set is {y1, y2, ···, yc}. The
available logical labels and the desired distribution labels
of X are separately indicated by L = {l1, l2, ···, ln} and
D = {d1, d2, ···, dn}, where li and di are:
$l_i = (l_i^{y_1}, l_i^{y_2}, \cdots, l_i^{y_c})^T, \quad d_i = (d_i^{y_1}, d_i^{y_2}, \cdots, d_i^{y_c})^T. \qquad (1)$
To be specific, LE aims to recover D based on the informa-
tion provided by X and L. For most existing LE methods,
their objectives can be concisely summarized as follows:
$\min_{\theta} \|f_{\theta}(X) - L\|_F^2 + \gamma\,\mathrm{reg}(f_{\theta}(X)), \qquad (2)$
in which D = fθ(X), fθ(·) indicates the mapping from
X to D, reg(·) denotes the regularization function, and γ
Figure 1. Illustration of label relevant information. Excavating the essential label relevant information directly is challenging and we adopt an indirect way here. We jointly investigate the information about the assignments of labels to the instance and the information about the label gaps between logical labels and label distributions. Given the i-th instance $x_i$, the label gap of the $y_j$ label is $\delta_i^{y_j} = l_i^{y_j} - d_i^{y_j}$. The information contained in $l_i^{y_j}$ and $\delta_i^{y_j}$ can be amalgamated to form the essential label relevant information. In other words, we employ $l_i^{y_j}$ to explore the label relevant information about the label assignments and $\delta_i^{y_j}$ to excavate the label relevant information about the label gaps. To a certain degree, the combination of $\delta_i^{y_j}$ and $l_i^{y_j}$ is equivalent to $d_i^{y_j}$. As depicted here, $l_i^{y_j}$ indicates that $y_1$ and $y_3$ are related labels and $\delta_i^{y_j}$ provides the importance of $y_1$ and $y_3$.
is the trade-off parameter. Most existing LE methods vary
in reg(·). For example, GLLE [28] calculates the distance-
based similarity matrix of data and employs the smoothness
assumption [33] to construct reg(·); LESC [24] considers
the global sample correlations and introduces the low-rank
constraint as the regularization; PNLR [13] leverages reg(·)
to maintain positive and negative label relations during the
recovery process. Although remarkable progress can be
made by the aforementioned methods, they ignore the label ir-
relevant information contained in X, which prevents the
further improvement of recovery results. For example, in
the LE task of recovering facial age label distributions, the
label irrelevant information, such as specularities informa-
tion, cast shadows information, and occlusions information,
may result in the incorrect mapping process of fθ(·)and the
unsuitable regularization of reg(·), eventually leading to the
unsatisfactory recovery performance.
To overcome the aforementioned limitation, we present
a Label Information Bottleneck (LIB) method for LE. Con-
cretely, the core idea of LIB is to learn the latent representa-
tion H, which preserves the maximum label relevant infor-
mation, from X, and jointly recovers the label distributions
based on the latent representation. For the LE problem, the
label relevant information is the information that describes
the description degrees of labels. It is tough to explore the
label relevant information directly. As shown in Fig. 1, we
decompose the label relevant information into two compo-
nents, namely the assignments of labels to the instance and
the label gaps between label distributions and logical labels.
Inspired by Information Bottleneck (IB) [25], LIB utilizes
the existing logical labels to explore the information about
the assignments of labels to the instance. Unlike simply em-
ploying the original IB on the LE task, our method further
considers the information about the label gaps between la-
bel distributions and logical labels. It is noteworthy that the
above two components of the label relevant information are
jointly explored in our method, and that is why we term the
proposed method Label Information Bottleneck (LIB). The
main contributions can be summarized as follows:
•We decompose the label relevant information into the
information about the assignments of labels to instance
and the information about the label gaps between logi-
cal labels, both of which can be jointly explored during
the learning process of our method.
•We introduce a novel LE method, termed LIB, which
excavates the label relevant information to exactly re-
cover the label distributions. Based on the original IB,
which explores the label assignments information for
LE, LIB further explores the label gaps information.
•We verify the effectiveness of LIB by performing ex-
tensive experiments on several datasets. Experimental
results show that the proposed method can achieve the
competitive performance, compared to state-of-the-art
LE methods.
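As a small numeric illustration of the decomposition used throughout (the values below are made up, not LIB's model), the logical label and the label gap together recover the label distribution:

```python
# Made-up values illustrating d_i = l_i - delta_i from Fig. 1.
import numpy as np

l_i = np.array([1.0, 0.0, 1.0, 0.0])      # logical labels: y1 and y3 are relevant
d_i = np.array([0.7, 0.0, 0.3, 0.0])      # desired label distribution
delta_i = l_i - d_i                        # label gap
print(delta_i)                             # [0.3 0.  0.7 0. ]
print(np.allclose(l_i - delta_i, d_i))     # True: combining the gap with l_i gives d_i
```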
|
Tu_Toward_Accurate_Post-Training_Quantization_for_Image_Super_Resolution_CVPR_2023 | Abstract
Model quantization is a crucial step for deploying super
resolution (SR) networks on mobile devices. However, exist-
ing works focus on quantization-aware training, which re-
quires a complete dataset and expensive computational over-
head. In this paper, we study post-training quantization
(PTQ) for image super resolution using only a few unla-
beled calibration images. As the SR model aims to maintain
the texture and color information of input images, the distri-
bution of activations is long-tailed, asymmetric and highly
dynamic compared with classification models. To this end,
we introduce the density-based dual clipping to cut off the
outliers based on analyzing the asymmetric bounds of acti-
vations. Moreover, we present a novel pixel aware calibra-
tion method with the supervision of the full-precision model
to accommodate the highly dynamic range of different sam-
ples. Extensive experiments demonstrate that the proposed
method significantly outperforms existing PTQ algorithms
on various models and datasets. For instance, we get a
2.091 dB increase on the Urban100 benchmark when quantiz-
ing EDSR×4 to 4-bit with 100 unlabeled images. Our code
is available in both PyTorch and MindSpore.
| 1. Introduction
Image super resolution (SR) is a classical image pro-
cessing task in computer vision, which reconstructs high-
resolution (HR) images from the corresponding low-
resolution (LR) images. SR has been widely applied in
the real-world scenarios, such as medical imaging [12, 35],
surveillance [1, 49], satellite imagery [31, 36] and smart-
phone display [8, 19]. With the rapid development of deep
learning in recent years, SR models with deep neural net-
work (DNN) structure have continued to achieve state-of-
the-art performance on various datasets. However, these
SR models require significant storage and computational re-
sources, which makes their deployment on mobile devices
extremely difficult. To improve the inference efficiency,
various techniques have been proposed to compress the
models, such as network pruning [16, 50], model quantiza-
Table 1. Computational overhead of different quantization methods on EDSR model. The FP denotes full-precision training, the Gt denotes the ground-truth, and the Bs denotes batch size.
Method      Type  Data  Gt  Bs  Iters    Run time
EDSR [28]   FP    800   ✓   16  15,000   240
PAMS [25]   QAT   800   ✓   16  1,500    24
FQSR [40]   QAT   800   ✓   16  15,000   120
CADyQ [14]  QAT   800   ✓   8   30,000   240
DAQ [15]    QAT   800   ✓   4   300,000  1200
DDTB [52]   QAT   800   ✓   16  3,000    48
Ours        PTQ   100   ✗   2   500      1
tion [13, 38], compact architecture design [8, 9] and knowl-
edge distillation [29, 41, 45, 46]. Among these approaches,
model quantization is particularly beneficial for existing artificial
intelligence (AI) accelerators [3, 42], which generally focus on
low-precision arithmetic, resulting in lower latency, smaller
memory footprint and less energy consumption.
Although previous SR quantization methods make
great efforts to improve the performance at a given bit-
width, their main drawback is that they require quantiza-
tion aware training (QAT) with complete datasets and ex-
pensive computational overhead. As shown in Table 1,
the full-precision EDSR model needs to be trained for 15,000
iterations with a batch size of 16, which takes 8 days on NVIDIA Titan X
GPUs [28]. To recover the performance drop of the quan-
tized models, most methods also need to train with the same
iterative steps on the complete training dataset, in which one
training step in QAT actually takes more GPU memory and
longer running time than those of the regular floating-point.
In contrast, post-training quantization (PTQ) only
requires a few unlabeled calibration images without train-
ing, which enables fast deployment on various devices
within minutes. Nevertheless, different from image clas-
sification, super resolution requires an accurate prediction for
each pixel of the output image, which is highly sensitive
to low-bit compression of feature maps. Figure 2 shows
the original floating-point activations of different layers and
samples; we observe three properties of their distributions
that are unfriendly to quantization: (1) Long-tailed:
Figure 1. The overview of the proposed post-training quantization framework for image super resolution (Step 1: density-based dual clipping; Step 2: pixel-aware calibration).
the distribution is dense in the middle yet sparse
in the tails, which means most values lie in a small range,
while only a few outliers have larger amplitude; (2) Asym-
metric: the density on the two tails of the distribution is
asymmetric, and the skewness differs for different layers; (3)
Highly-dynamic: the activation range varies, even by a factor
of two, for different input samples. Therefore, existing
PTQ methods which are designed for image classification
cannot be transferred to the SR task directly.
In this paper, we propose a coarse-to-fine method to get
the accurate quantized SR model with post-training quan-
tization. We first introduce the density-based dual clipping
(DBDC) to cut off most of the outliers for narrowing the
distribution to a valid range. Different from previous meth-
ods [25, 40], the amplitudes of the lower and upper clips are
not the same, and the clipping positions depend on the den-
sity of the two tails. The clipping scheme is employed itera-
tively to eliminate the long-tail distribution. The asymmet-
ric quantizer with adjustable lower and upper clip values is
adopted to solve the asymmetric distribution in SR models.
And then we further propose a novel pixel-aware calibration
(PaC) to help the quantized network fit the highly dynamic
activations for different samples. The PaC leverages fea-
ture maps of the full-precision model to supervise those of
the quantized model. To stabilize the finetune process, we
only update the quantization parameters instead of the orig-
inal weights. The whole quantization process of our method
can be finished within minutes with a few unlabeled images.
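A hedged sketch of the Step-1 clipping rule is given below (not the paper's algorithm): whichever tail of the activation histogram is sparser gets clipped inward, yielding asymmetric bounds; the bin width, step size, and iteration count are assumptions.

```python
# Sketch of density-based asymmetric clipping of long-tailed activations.
import numpy as np

def density_dual_clip(x, step=0.01, iters=50):
    l, u = float(x.min()), float(x.max())
    for _ in range(iters):
        delta = step * (u - l)
        dens_l = np.mean((x >= l) & (x < l + delta))   # density near the lower bound
        dens_u = np.mean((x <= u) & (x > u - delta))   # density near the upper bound
        if dens_l < dens_u:
            l += delta                                  # lower tail is sparser: clip it
        else:
            u -= delta                                  # upper tail is sparser: clip it
    return l, u

acts = np.concatenate([np.random.randn(10000), 8 + 2 * np.random.rand(50)])  # long tail
l, u = density_dual_clip(acts)
clipped = np.clip(acts, l, u)               # activations passed on to the quantizer
print(round(l, 3), round(u, 3))
```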
The contributions of this paper are summarized as follows:
(1) We present a detailed analysis to demonstrate the
challenge of post-training quantization on image super reso-
lution, indicating that the performance degradation of quan-
tized SR model suffers from the long-tailed, asymmetric
and highly-dynamic distribution of feature maps.
(2) We introduce a coarse-to-fine quantization method
to accommodate the above problems. With the density-based
dual clipping and the pixel-aware calibration, the proposed
method is able to conduct accurate quantization with only afew unlabeled calibration images. To the best of our knowl-
edge, we are the first to optimize the post-training quantiza-
tion for image super resolution task.
(3) Extensive experiments on various benchmark models
and datasets demonstrate that our method significantly out-
performs the existing PTQ methods, and is able to achieve
comparable performance with the QAT in some setting.
Further, our method can speed up the convergence and bring
up the performance when combined with QAT methods.
|
Song_Advancing_Visual_Grounding_With_Scene_Knowledge_Benchmark_and_Method_CVPR_2023 | Abstract
Visual grounding (VG) aims to establish fine-grained
alignment between vision and language. Ideally, it can be
a testbed for vision-and-language models to evaluate their
understanding of the images and texts and their reason-
ing abilities over their joint space. However, most existing
VG datasets are constructed using simple description texts,
which do not require sufficient reasoning over the images
and texts. This has been demonstrated in a recent study [27],
where a simple LSTM-based text encoder without pretrain-
ing can achieve state-of-the-art performance on mainstream
VG datasets. Therefore, in this paper, we propose a novel
benchmark of Scene Knowledge-guided Visual Grounding
(SK-VG), where the image content and referring expressions
are not sufficient to ground the target objects, forcing the
models to have a reasoning ability on the long-form scene
knowledge. To perform this task, we propose two approaches
to accept the triple-type input, where the former embeds
knowledge into the image features before the image-query
interaction; the latter leverages linguistic structure to assist
in computing the image-text matching. We conduct exten-
sive experiments to analyze the above methods and show
that the proposed approaches achieve promising results but
still leave room for improvement, including performance
and interpretability. The dataset and code are available at
https://github.com/zhjohnchan/SK-VG .
| 1. Introduction
Visual grounding (VG), aiming to locate an object re-
ferred to by a description phrase/text in an image, has
emerged as a prominent attractive research direction. It can
be applied to various tasks (e.g., visual question answering
[4,13,38,51] and vision-and-language navigation [1,11,35])
and also be treated as a proxy to evaluate machines for
Figure 1 example (scene knowledge shown with the image and queries): “The man Jake is celebrating Christmas with his friends. Jake puts his left hand on the glasses and holds a wine glass in his right hand. His friend Alan is sitting on his right in a suit. Carol, Jake’s another friend, stands on Jake’s left with a wine glass in his left hand, grinning with white teeth.” Q1: The black glasses. Q2: Jake’s wine glass.
Figure 1. An example from the proposed SK-VG dataset for scene
knowledge-guided visual grounding. The task requires a model to
reason over the (image, scene knowledge, query) triple to locate
the target object referred to by the query.
open-ended scene recognition. Typically, VG requires mod-
els to reason over vision and language and build connec-
tions through single-modal understanding and cross-modal
matching. Yet, current VG benchmarks (e.g., RefCOCO [47],
RefCOCO+ [47], RefCOCOg [29], ReferItGame [17], and
CLEVR-Ref+ [23]) cannot serve as a good test bed to eval-
uate the reasoning ability since they only focus on simple
vision-language alignment. In addition to the simple nature
of constructed referring expressions, this can be reflected in
the recent state-of-the-art study [27], where they showed that
VG models are less affected by language modeling through
extensive empirical analyses.
In this paper, we believe that the intrinsic difficulty of VG
lies in the difference between perceptual representations of
images and cognitive representations of texts. Specifically, vi-
sual features are obtained through perceptual learning, which
Figure 2. Illustrations of four categories of grounding tasks: (i) grounding categories (e.g., PASCAL VOC 2007, MS-COCO); (ii) grounding phrases (e.g., Flickr30k Entities); (iii) grounding expressions (e.g., RefCOCO, CLEVR-Ref+, Cops-Ref); (iv) grounding expressions given scene knowledge (Ours). The height of the input green and blue rectangles denotes the relative amount of information.
only maps visual appearances in images to semantic con-
cepts. However, open-ended queries might require VG mod-
els to understand the whole scene knowledge before perform-
ing reasoning to locate the target object. As shown in Figure
1, the perceptual features can encode the information about
“a wine glass ”, but it would struggle to locate “ Jake’s wine
glass ” without the scene knowledge about “ who is Jake? ”.
This is a challenging task owing to two facts: (i) From the
dataset perspective, there are no relevant benchmarks for
the VG researchers to evaluate their models; (ii) From the
model/algorithm perspective, it is not easy to design models
to perform reasoning among images, scene knowledge, and
open-ended querying texts.
Therefore, we propose to break this limitation of cur-
rent VG research and construct a new benchmark requir-
ing VG to perform reasoning over Scene Knowledge (i.e.,
text-based stories). The benchmark named SK-VG contains
∼40,000 referring expressions and 8,000 scene stories from
4,000 images, where each image contains 2 scene stories
with 5 referring expressions for each story. Moreover, to
evaluate the difficulty levels of queries, we curate the test
set by splitting the samples into easy/medium/hard cate-
gories to provide a detailed evaluation of the vision-language
models. Under this new setting, we develop a one-stage ap-
proach (i.e., Knowledge-embedded Vision-Language Inter-
action (KeViLI)) and a two-stage approach (i.e., Linguistic-
enhanced Vision-Language Matching (LeViLM)). In KeViLI, the scene knowledge is first embedded into the image features, and then the interaction between the image and the query is performed. In LeViLM, the image features and the text features are first extracted, and then the matching between the (image) regions and the (text) entities is computed, assisted by the structured linguistic information.
Through extensive experiments, we show that the proposed
approaches can achieve the best performance but still leave
room for improvement, especially in the hard split. It chal-
lenges the models from three perspectives: First, it is an open-ended grounding task; Second, the scene stories are
long narratives consisting of multiple sentences; Third, it
might require the multi-hop reasoning ability of the models.
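A minimal sketch of the one-stage idea, embedding the scene knowledge into the image features before the image-query interaction, is given below. The module name, the feature dimensions, and the single cross-attention layer are assumptions for illustration and do not reproduce KeViLI's actual design.

```python
import torch
import torch.nn as nn

class KnowledgeFusion(nn.Module):
    """Illustrative stand-in for injecting scene knowledge into image features.

    Image patch features attend to the encoded scene-knowledge tokens, and the
    result is added back to the visual stream before image-query interaction.
    """
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_feats, know_feats):
        # img_feats: (B, N_patches, dim); know_feats: (B, N_tokens, dim)
        attended, _ = self.cross_attn(img_feats, know_feats, know_feats)
        return self.norm(img_feats + attended)

# Toy usage: knowledge-aware visual features are then matched to the query.
fusion = KnowledgeFusion()
img = torch.randn(2, 196, 256)        # visual patch features
knowledge = torch.randn(2, 120, 256)  # encoded long-form scene story
query = torch.randn(2, 12, 256)       # encoded referring expression
fused = fusion(img, knowledge)
scores = torch.einsum('bnd,bmd->bnm', fused, query)  # region-query affinities
```

A two-stage variant in the spirit of LeViLM would instead compute region-entity matching scores and combine them with the parsed linguistic structure.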
In summary, the contributions of this paper are three-fold:
•We introduce a challenging task that requires VG mod-
els to reason over (image, scene knowledge, query)
triples and build a new dataset named SK-VG on top of
real images through manual annotations.
•We propose two approaches to enhance the reasoning in SK-VG, i.e., a one-stage approach (KeViLI) and a two-stage approach (LeViLM).
•Extensive experiments demonstrate the effectiveness of
the proposed approaches. Further analyses and discus-
sions could be a good starting point for future study in
the vision-and-language field.
|
Tu_Learning_From_Noisy_Labels_With_Decoupled_Meta_Label_Purifier_CVPR_2023 | Abstract
Training deep neural networks (DNN) with noisy labels
is challenging since DNN can easily memorize inaccurate
labels, leading to poor generalization ability. Recently,
the meta-learning based label correction strategy is widely
adopted to tackle this problem via identifying and correcting
potential noisy labels with the help of a small set of clean
validation data. Although training with purified labels can
effectively improve performance, solving the meta-learning
problem inevitably involves a nested loop of bi-level opti-
mization between model weights and hyper-parameters (i.e.,
label distribution). As compromise, previous methods re-
sort to a coupled learning process with alternating update.
In this paper, we empirically find such simultaneous opti-
mization over both model weights and label distribution
can not achieve an optimal routine, consequently limiting
the representation ability of backbone and accuracy of cor-
rected labels. From this observation, a novel multi-stage
label purifier named DMLP is proposed. DMLP decou-
ples the label correction process into label-free representa-
tion learning and a simple meta label purifier. In this way, DMLP can focus on extracting discriminative features and
label correction in two distinctive stages. DMLP is a plug-
and-play label purifier, the purified labels can be directly
reused in naive end-to-end network retraining or other ro-
bust learning methods, where state-of-the-art results are
obtained on several synthetic and real-world noisy datasets,
especially under high noise levels. Code is available at
https://github.com/yuanpengtu/DMLP .
| 1. Introduction
Deep learning has achieved significant progress on vari-
ous recognition tasks. The key to its success is the availabil-
* Equal contribution.
†Corresponding author.
Figure 1. (a) Traditional coupled alternating update to solve meta
label purification problem, and (b) the proposed DMLP method
that decouples the label purification process into representation
learning and a simple non-nested meta label purifier.
ity of large-scale datasets with reliable annotations. Collect-
ing such datasets, however, is time-consuming and expensive.
Easy ways to obtain labeled data, such as web crawling [31],
inevitably yield samples with noisy labels, which is not
appropriate for directly training DNNs, since these complex models are prone to memorizing noisy labels [2].
Towards this problem, numerous Learning with Noisy La-
bel (LNL) approaches were proposed. Classical LNL meth-
ods focus on identifying the noisy samples and reducing their
effect on parameter updates by abandoning [12] or assigning
smaller importance. However, when it comes to extremely
noisy and complex scenarios, such scheme struggles since
there is no sufficient clean data to train a discriminative clas-
sifier. Therefore, label correction approaches are proposed
to augment clean training samples by revising noisy labels to
underlying correct ones. Among them, meta-learning based
approaches [9, 16, 25] achieve state-of-the-art performance
via resorting to a small clean validation set and taking noisy
labels as hyper-parameters, which provides sound guidance
toward underlying label distribution of clean samples. How-
ever, such meta purification inevitably involves a nested
bi-level optimization problem on both model weights and hyper-parameters (shown in Fig. 1 (a)), which is computationally infeasible. As a compromise, the alternating update
between model weights and hyper-parameters is adopted
to optimize the objective [9, 16, 25], leading to a coupled
solution for representation learning and label purification.
Empirical observation. Intuitively, alternate optimiza-
tion over a large search space (model weight and hyper-
parameters) may lead to sub-optimal solutions. To investi-
gate how such approximation affects results in robust learn-
ing, we conduct empirical analysis on CIFAR-10 [14] with
recent label purification methods MLC [41] and MSLC [9],
which consist of a deep model and a meta label correction network, and make the observations summarized in Fig. 2.
•Coupled optimization hinders quality of corrected la-
bels. We first compare the Coupled meta corrector MLC
with its extremely Decoupled variant where the model
weights are first optimized for 70epochs with noisy labels
and get fixed, then labels are purified with the guidance of
validation set. We adopt the accuracy of corrected label
to measure the performance of purification. From Fig. 2 (a), we can clearly observe that, compared with the Decoupled counterpart, joint optimization yields inferior correction performance, and these miscorrections in turn harm the representation learning in coupled optimization.
•Coupled optimization hinders representation ability.
We investigate the representation quality by evaluating the
linear prob accuracy [6] of extracted feature in Fig. 2 (b).
We find the representation quality of Coupled training is
much worse at the beginning, which leads to slow and un-
stable representation learning in the later stage. To further
investigate the effect on representation learning, we also
resort to a well pretrained backbone with self-supervised
learning [5] as initialization; recent research [40] shows that pretrained representations are substantially helpful for LNL frameworks. However, we find this conclusion does not strictly
hold for coupled meta label correctors. As shown in Fig. 2
(c), by comparing the classification accuracy from classi-
fier of MLC/MSLC, we observe that the pretrained model only brings marginal improvement if the model weights are still coupled with hyper-parameters. In contrast, when the weight of
backbone is fixed and decoupled from the label purification
and classifier, the improvement becomes more significant.
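To make the decoupled setting above concrete, the sketch below freezes a trained backbone and fits only a linear head on the small clean validation set, then uses that head to revise noisy labels. The 20-epoch schedule, the 0.9 confidence threshold, and the revision rule are assumptions for illustration; this is a simplified stand-in, not the MLC/MSLC or DMLP procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def purify_labels(backbone, noisy_loader, clean_loader, num_classes, feat_dim=512):
    """Decoupled purification sketch: the backbone stays frozen; only a linear
    head is fit on the small clean set and then used to revise noisy labels."""
    backbone.eval()
    head = nn.Linear(feat_dim, num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)

    for _ in range(20):                                  # fit the linear head
        for x, y in clean_loader:
            with torch.no_grad():
                feats = backbone(x)                      # frozen representation
            loss = F.cross_entropy(head(feats), y)
            opt.zero_grad(); loss.backward(); opt.step()

    purified = []
    with torch.no_grad():                                # revise confident samples only
        for x, y_noisy in noisy_loader:
            probs = F.softmax(head(backbone(x)), dim=1)
            conf, pred = probs.max(dim=1)
            purified.append(torch.where(conf > 0.9, pred, y_noisy))
    return torch.cat(purified)
```

Because the backbone is never updated, the label-revision step cannot feed miscorrections back into representation learning, which is the point of the comparison above.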
Decoupled Meta Purification. From the observation
above, we find the decoupling between model weights and
hyperparameters of meta correctors is essential to label ac-
curacy and final results. Therefore, in this paper, we aim
at detaching the meta label purification from representation
learning and designing a simple meta label purifier which
is more friendly to optimization of label distribution prob-
lem than existing complex meta networks [9, 41]. Hence we propose a general multi-stage label correction strategy,
named Decoupled Meta Label Purifier (DMLP). The core
of DMLP is a meta-learning based label purifier, however,
to avoid solving the bi-level optimization with a coupled
solution, DMLP decouples this process into self-supervised
representation learning and a linear meta-learner to fit un-
derlying correct label distribution (illustrated as Fig. 1 (b)),
thus simplifies the label purification stage as a single-level
optimization problem. The simple meta-learner is carefully
designed with two mutually reinforcing correcting processes,
named intrinsic primary correction (IPC) and extrinsic aux-
iliary correction (EAC) respectively. IPC plays the role of
purifying labels in a global sense at a steady pace, while EAC
targets at accelerating the purification process via looking
ahead (i.e., training with) the updated labels from IPC. The
two processes can enhance the ability of each other and form
a positive loop of label correction. Our DMLP framework
is flexible for application, the purified labels can either be
directly applied for naive end-to-end network retraining, or
exploited to boost the performance of existing LNL frame-
works. Extensive experiments conducted on mainstream
benchmarks, including synthetic (noisy versions of CIFAR)
and real-world (Clothing1M) datasets, demonstrate the supe-
riority of DMLP. In a nutshell, the key contributions of this
paper include:
•We analyze the necessity of decoupled optimization
for label correction in robust learning, based on which we
propose DMLP, a flexible and novel multi-stage label purifier
that solves bi-level meta-learning problem with a decoupled
manner, which consists of representation learning and non-
nested meta label purification;
•In DMLP, a novel non-nested meta label purifier equipped with two correctors, IPC and EAC, is proposed.
IPC is a global and steady corrector, while EAC accelerates
the correction process via training with the updated labels
from IPC. The two processes form a positive training loop
to learn more accurate label distribution;
•Deep models trained with purified labels from DMLP
achieve state-of-the-art results on several synthetic and real-
world noisy datasets across various types and levels of label
noise, especially under high noise levels. Extensive ablation
studies are provided to verify the effectiveness.
|
Stathopoulos_Learning_Articulated_Shape_With_Keypoint_Pseudo-Labels_From_Web_Images_CVPR_2023 | Abstract
This paper shows that it is possible to learn models for
monocular 3D reconstruction of articulated objects ( e.g.
horses, cows, sheep), using as few as 50-150 images la-
beled with 2D keypoints. Our proposed approach involves
training category-specific keypoint estimators, generating
2D keypoint pseudo-labels on unlabeled web images, and
using both the labeled and self-labeled sets to train 3D re-
construction models. It is based on two key insights: (1)
2D keypoint estimation networks trained on as few as 50-
150 images of a given object category generalize well and
generate reliable pseudo-labels; (2) a data selection mech-
anism can automatically create a “curated” subset of the
unlabeled web images that can be used for training – we
evaluate four data selection methods. Coupling these two
insights enables us to train models that effectively utilize
web images, resulting in improved 3D reconstruction per-
formance for several articulated object categories beyond
the fully-supervised baseline. Our approach can quickly
bootstrap a model and requires only a few images labeled
with 2D keypoints. This requirement can be easily satisfied
for any new object category. To showcase the practicality
of our approach for predicting the 3D shape of arbitrary
object categories, we annotate 2D keypoints on 250 giraffe
and bear images from COCO in just 2.5 hours per category.
| 1. Introduction
Predicting the 3D shape of an articulated object from
a single image is a challenging task due to its under-
constrained nature. Various successful approaches [14, 19]
have been developed for inferring the 3D shape of humans.
These approaches rely on strong supervision from 3D joint
locations acquired using motion capture systems. Similar
breakthroughs for other categories of articulated objects,
such as animals, remain elusive. This is primarily due to the
scarcity of appropriate training data. Some works (such as
CMR [15]) learn to predict 3D shapes using only 2D labels
for supervision. However, for most object categories even
Figure 1. Overview of the proposed framework. It includes: (a) training a category-specific keypoint estimator with a limited labeled set S, (b) generating keypoint pseudo-labels on web images, (c) automatic curation of web images to create a subset U′, and (d) training a model for 3D shape prediction with images from S and U′.
However, for most object categories even 2D labels are limited or non-existent. We ask: how can we
learn models that predict the 3D shape of articulated ob-
jects in-the-wild when limited or no annotated images are
available for a given object category?
In this paper we propose an approach that requires as few
as 50-150 images labeled with 2D keypoints. This labeled
set can be easily and quickly created for any object category.
Our proposed approach is illustrated in Figure 1 and sum-
marized as follows: (a) train a category-specific keypoint
estimation network using a small set Sof images labeled
with 2D keypoints; (b) generate 2D keypoint pseudo-labels
on a large unlabeled set Uconsisting of automatically ac-
quired web images; (c) automatically curate Uby creating
a subset of images and pseudo-labels U′ according to a selection criterion; (d) train a model for 3D shape prediction with data from both S and U′.
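Step (c) can be pictured with a simple consistency rule: keep the web images on which two keypoint estimators agree most. The keep ratio and the mean Euclidean discrepancy below are assumptions made to illustrate this family of criteria, not the paper's exact definitions.

```python
import torch

def select_by_consistency(preds_a, preds_b, keep_ratio=0.3):
    """Keep the web images whose keypoint estimates from two models agree most.

    preds_a, preds_b: (num_images, num_keypoints, 2) pixel coordinates predicted
    by two different estimators (e.g., h_phi and an auxiliary g_psi).
    Returns the indices of the most consistent images.
    """
    disc = torch.linalg.norm(preds_a - preds_b, dim=-1).mean(dim=-1)  # per-image
    k = max(1, int(keep_ratio * len(disc)))
    return torch.topk(-disc, k).indices  # smallest discrepancy first

# Toy usage with random predictions for 1000 unlabeled web images.
pa = torch.rand(1000, 16, 2) * 256
pb = pa + torch.randn_like(pa) * 5.0   # a mostly consistent second estimator
kept = select_by_consistency(pa, pb)
```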
A key insight is that current 2D keypoint estimators
[30, 34, 38] are accurate enough to create robust 2D key-
point detections on unlabeled data, even when trained with
a limited number of images. Another insight is that images
from U increase the variability of several factors, such as
Figure 2. Given a small set S of images labeled with 2D keypoints, we train a 2D keypoint estimation network hϕ and generate keypoint pseudo-labels on web images (set U). We select a subset of U to train a 3D shape predictor fθ. Two methods for data selection can be seen here: (a) CF-CM: an auxiliary 2D keypoint estimator gψ generates predictions on U, and images with the smallest discrepancy between the keypoint estimates of hϕ and gψ are selected (criterion (c)); (b) CF-CM2: fθ is trained with samples from S and generates predictions on U. Images with the smallest discrepancy between the keypoint estimates of hϕ and fθ are selected (criterion (d)) to retrain fθ.
camera viewpoints, articulations, and image backgrounds that are important for training generalizable models for 3D shape prediction. However, the automatically acquired web images contain a high proportion of low-quality images with wrong or heavily truncated objects. Naively using all
web images and pseudo-labels during training leads to de-
graded performance as can be seen in our experiments in
Section 4.2. While successful pseudo-label (PL) selection
techniques [4, 5, 23, 29, 40] exist for various tasks, they
do not address the challenges in our setting. These works
investigate PL selection when the unlabeled images come
from curated datasets ( e.g. CIFAR-10 [20], Cityscapes [6]),
while in our setting they come from the web. In addition,
they eventually use all unlabeled images during training
while in our case most of the web images should be dis-
carded. To effectively utilize images from the web we inves-
tigate four criteria to automatically create a “curated” sub-
set that includes images with high-quality pseudo-labels.
These contain a confidence-based criterion as well as three
consistency-based ones (see Figure 2 for two examples).
Through extensive experiments on five different articu-
lated object categories (horse, cow, sheep, giraffe, bear) and
three public datasets, we demonstrate that training with the
proposed data selection approaches leads to considerably
better 3D reconstructions compared to the fully-supervised
baseline. Using all pseudo-labels leads to degraded perfor-
mance. We analyze the performance of the data selection
methods used and conclude that consistency-based selec-
tion criteria are more effective in our setting. Finally, we
conduct experiments with varying number of images in the
labeled set S. We show that even with only 50 annotated
instances and images from web, we can train models that
lead to better 3D reconstructions than the fully-supervised
models trained with more labels. |
Tang_Visual_Recognition_by_Request_CVPR_2023 | Abstract
Humans have the ability of recognizing visual semantics
in an unlimited granularity, but existing visual recognition
algorithms cannot achieve this goal. In this paper, we estab-
lish a new paradigm named visual recognition by request
(ViRReq1) to bridge the gap. The key lies in decompos-
ing visual recognition into atomic tasks named requests and
leveraging a knowledge base, a hierarchical and text-based
dictionary, to assist task definition. ViRReq allows for (i)
learning complicated whole-part hierarchies from highly
incomplete annotations and (ii) inserting new concepts with
minimal efforts. We also establish a solid baseline by in-
tegrating language-driven recognition into recent seman-
tic and instance segmentation methods, and demonstrate
its flexible recognition ability on CPP and ADE20K, two
datasets with hierarchical whole-part annotations.
| 1. Introduction
Visual recognition is one of the fundamental problems
in computer vision. In the past decade, visual recognition
algorithms have been largely advanced with the availability
of large-scale datasets and deep neural networks [ 8,13,18].
Typical examples include the ability of recognizing 10,000s
of object classes [ 6], segmenting objects into parts or even
parts of parts [ 43], using natural language to refer to open-
world semantic concepts [ 31],etc.
Despite the increasing recognition accuracy in standard
benchmarks, we raise a critical yet mostly uncovered issue,
namely, the unlimited granularity of visual recognition.
As shown in Figure 1, given an image, humans have the
ability of recognizing arbitrarily fine-grained contents from
it, but existing visual recognition algorithms cannot achieve
the same goal. Superficially, this is caused by limited anno-
tation budgets so that few training data is available for fine-
1We recommend the readers to pronounce ViRReq as /’virik/ .
(Figure 1 panel labels: regions & instances such as person and bag, about 80% annotated; parts such as head and torso, about 40% annotated; parts of parts such as eye and mouth, about 10% annotated.)
Figure 1. An illustration of unlimited granularity in visual recog-
nition. Top: an example image from ADE20K [ 43] with instance-
level and part-level annotations. Middle : more and more incom-
plete annotations of instances, parts, and parts of parts. Bottom :
as granularity goes finer, higher uncertainty occurs in recogniz-
ing the boundary (left) and semantic class (right) of objects and/or
parts; here, green, blue, and red texts indicate labeled, unlabeled (but defined), and unlabeled (and undefined) objects, respectively.
grained and/or long-tailed concepts, but we point out that a
more essential reason lies in the conflict between granular-
ity and recognition certainty – as shown in Figure 1,when
annotation granularity goes finer, annotation certainty will
inevitably be lower. This motivates the idea that the granular-
ity of visual recognition shall be variable across instances
and scenarios. For this purpose, we propose to interpret
semantic annotations into smaller units ( i.e., requests) and
assume that recognition is performed only when it is asked,
so that one can freely adjust the granularity according to
object size, importance, clearness, etc.
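To make the request abstraction more tangible, the toy data structure below pairs a text-based, hierarchical knowledge base with a list of requests whose depth controls the recognition granularity. The field names, the two request kinds, and the probing-pixel argument are illustrative assumptions rather than the paper's exact formalization.

```python
from dataclasses import dataclass

# A toy, text-based knowledge base: each concept lists its parts, so the
# whole-part hierarchy is traversed only when a request asks for it.
KNOWLEDGE_BASE = {
    "scene":  ["person", "bag", "shoes"],
    "person": ["head", "torso", "arm", "leg"],
    "head":   ["eye", "mouth"],
}

@dataclass
class Request:
    """One atomic recognition step, answered only when it is asked.

    kind = "whole-to-part": segment the parts of `target` inside a given mask.
    kind = "instance":      pick out one instance of `target` at a probed pixel.
    """
    kind: str
    target: str             # concept to be decomposed or probed
    mask_id: int            # which previously produced mask the request refers to
    probe: tuple = None     # (x, y) pixel used only by instance requests

# Granularity is controlled by how deep the requests go: stopping after the
# first request keeps coarse regions; issuing the later ones refines them.
requests = [
    Request("whole-to-part", "scene", mask_id=0),
    Request("instance", "person", mask_id=1, probe=(120, 88)),
    Request("whole-to-part", "person", mask_id=2),
    Request("whole-to-part", "head", mask_id=3),
]
```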
In this paper, we establish a new paradigm named visual
recognition by request (ViRReq ). We only consider the
segmentation task in this paper because it best fits the need
of unlimited granularity in the spatial domain. Compared to
|
Song_Learning_With_Fantasy_Semantic-Aware_Virtual_Contrastive_Constraint_for_Few-Shot_Class-Incremental_CVPR_2023 | Abstract
Few-shot class-incremental learning (FSCIL) aims at
learning to classify new classes continually from limited
samples without forgetting the old classes. The mainstream
framework tackling FSCIL is first to adopt the cross-entropy
(CE) loss for training at the base session, then freeze the
feature extractor to adapt to new classes. However, in this
work, we find that the CE loss is not ideal for the base ses-
sion training as it suffers poor class separation in terms of
representations, which further degrades generalization to
novel classes. One tempting method to mitigate this prob-
lem is to apply an additional na ¨ıve supervised contrastive
learning (SCL) in the base session. Unfortunately, we find
that although SCL can create a slightly better representa-
tion separation among different base classes, it still strug-
gles to separate base classes and new classes. Inspired
by the observations made, we propose Semantic-Aware Vir-
tual Contrastive model (SAVC), a novel method that facili-
tates separation between new classes and base classes by
introducing virtual classes to SCL. These virtual classes,
which are generated via pre-defined transformations, not
only act as placeholders for unseen classes in the repre-
sentation space, but also provide diverse semantic infor-
mation. By learning to recognize and contrast in the fan-
tasy space fostered by virtual classes, our SAVC signifi-
cantly boosts base class separation and novel class gen-
eralization, achieving new state-of-the-art performance on
the three widely-used FSCIL benchmark datasets. Code is
available at: https://github.com/zysong0113/SAVC.
| 1. Introduction
Like humans, the modern artificial intelligence models
are expected to be equipped with the ability of learning
continually. For instance, a face recognition system needs
†: authors contributed equally.∗: corresponding author.
Figure 1. The motivation of the proposed approach. Under the incremental-frozen framework in FSCIL, our SAVC learns to recognize and contrast in the fantasy space, which leads to better base class separation and novel class generalization than CE and SCL. (a) Comparison of the embedding spaces trained with different methods (CE, CE+SCL, and SAVC) in the base session and the few-shot incremental session. (b) Illustration of our Semantic-Aware Virtual Contrastive model (SAVC).
to continually learn to recognize new faces without forget-
ting those it has already kept in mind. In this case, Class-
Incremental Learning (CIL) [19, 25, 30, 37, 50] draws atten-
tion extensively, that sufficient and disjoint data are com-
ing session-by-session. However, it is unrealistic to always
rely on accessing of multitudinous data. Taking the case of
face recognition system again, whether it can quickly learn
a new concept with just a few images, is an important cri-
terion for evaluating its performance. As a result, design-
ing effective and efficient algorithms to resolve Few-Shot
Class-Incremental Learning (FSCIL) problem has drawn in-
creasing attention [6, 8, 12, 40, 44, 51, 55]. Unlike CIL, only
a few data are provided for FSCIL at incremental session
while sufficient samples are only visible at the base session.
The core challenge of CIL is to strike a balance of stabil-
ity and plasticity, i.e., suppressing forgetting on old knowl-
edge while adapting smoothly to new knowledge. Previ-
ous works on FSCIL have demonstrated that an incremen-
tally trained model is prone to overfit on limited new data
and suffer catastrophic forgetting [8, 12, 44, 54]. Thus the
mainstream FSCIL framework [40, 51, 58] is to first use
the cross-entropy (CE) loss for training in the base session,
then freeze the backbone to adapt to new classes. Given
such baseline framework, the main problem that needs to
be taken into consideration is: What makes a good base-
session-model that can generalize well to new classes with
limited data?
Intuitively, if all representations of base classes concen-
trate around their own clustering centers ( i.e., prototypes),
and all prototypes are far away from each other, then novel
classes should be easily incorporated into the current space
without overlapping. Therefore, we conjecture that base
class separation facilitates novel class generalization. How-
ever, the current widely adopted CE loss in FSCIL cannot
well separate the class margins, which causes poor gen-
eralization to new classes. This motivates our attempt at
methods with better clustering effects, such as supervised
contrastive learning (SCL) [21]. We conduct experiments
to validate our conjecture by training models optimized by
only CE loss, and both CE and SCL. We empirically find
that CE leads to lower base class separation and classifica-
tion accuracy than SCL. Although SCL indeed slightly im-
proves the base class separation with higher accuracy, we
find it implicitly decreases the inter-class distance. There-
fore, the two methods both leave inadequate room for future
updates, which leads to the inevitable overlapping between
novel classes and old classes in the representation space (as
shown in the left and middle columns of Fig. 1a).
The unsatisfactory performance of SCL implies that the
limited semantic information base session provided may
need our fantasy remedy. Although we have no access to
future classes in the base session, we can create many vir-
tual classes with various semantics filling the unallocated
representation space, which not only holds enough space
for future classes, but also enables the learning of diverse
semantics for better inference and generalization.
To this end, we propose a Semantic-Aware Virtual Con-
trastive (SAVC) model to improve FSCIL by boosting
base class separation (see Fig. 1b). Our fantasy space is con-
structed with pre-defined transformations [11, 15, 35, 52],
which can be regarded as the semantic extension of the orig-
inal space in finer grains. The type of transformations rep-
resents the richness of our imagination, i.e., how compact
we want the embedding space to be and how fine the se-
mantic grains we want to acquire. Learning to contrast and
recognize in the fantasy space not only brings a better classseparation effect, but also provides richer semantic infor-
mation to improve the accuracy. In contrast to the CE base-
line and SCL, our SAVC can simultaneously decrease intra-
class distance and increase inter-class distance, which leads
to better base class separation and novel class generalization
(as given in the right of Fig. 1a). Extensive experiments on
various benchmark datasets further verify the superiority of
our method.
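One common way to build such a fantasy space is to treat each (class, transformation) pair as its own virtual class, as sketched below with image rotations. The use of 90-degree rotations and the label arithmetic are assumptions for illustration and may differ from the pre-defined transformations used in the paper.

```python
import torch

def make_virtual_classes(images, labels):
    """Expand a batch with rotated copies, assigning each (class, rotation) pair
    a distinct virtual label so the loss must also separate transformed views."""
    views, virtual_labels = [], []
    for k in range(4):                               # 0, 90, 180, 270 degrees
        views.append(torch.rot90(images, k, dims=(2, 3)))
        virtual_labels.append(labels * 4 + k)        # distinct virtual class id
    return torch.cat(views), torch.cat(virtual_labels)

# Toy usage: 8 images from 60 base classes become 32 samples spread over 240
# virtual classes, which can feed both classification and contrastive losses.
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 60, (8,))
x_virtual, y_virtual = make_virtual_classes(x, y)
```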
Our main contributions are summarized in three folds:
1) We empirically discover that it is crucial to boost class
separation in the base session for the incremental-frozen
framework in FSCIL, which helps fast generalization for
novel classes with only a few samples.
2) We propose a novel Semantic-Aware Virtual Con-
trastive (SAVC) framework, enhancing the FSCIL learner
by learning to recognize and contrast in the fantasy space,
which preserves enough room for novel generalization and
enables a multi-semantic aggregated inference effect.
3) Extensive experiments on three FSCIL bench-
marks, i.e., CIFAR100, miniImageNet and CUB200,
demonstrate that our SAVC outperforms all approaches by a
large margin and achieves the state-of-the-art performance.
|
Streicher_BASiS_Batch_Aligned_Spectral_Embedding_Space_CVPR_2023 | Abstract
Graph is a highly generic and diverse representation,
suitable for almost any data processing problem. Spec-
tral graph theory has been shown to provide powerful algo-
rithms, backed by solid linear algebra theory. It thus can
be extremely instrumental to design deep network build-
ing blocks with spectral graph characteristics. For in-
stance, such a network allows the design of optimal graphs
for certain tasks or obtaining a canonical orthogonal low-
dimensional embedding of the data. Recent attempts to
solve this problem were based on minimizing Rayleigh-
quotient type losses. We propose a different approach of
directly learning the graph's eigenspace. A severe prob-
lem of the direct approach, applied in batch-learning, is the
inconsistent mapping of features to eigenspace coordinates
in different batches. We analyze the degrees of freedom of
learning this task using batches and propose a stable align-
ment mechanism that can work both with batch changes and
with graph-metric changes. We show that our learnt spec-
tral embedding is better in terms of NMI, ACC, Grassman
distance, orthogonality, and classification accuracy, com-
pared to SOTA. In addition, the learning is more stable.
| 1. Introduction
Representing information by using graphs and analyz-
ing their spectral properties has been shown to be an effec-
tive classical solution in a wide range of problems including
clustering [8,21, 32], classification [13], segmentation [26],
dimensionality reduction [5, 10, 23] and more. In this set-
ting, data is represented by nodes of a graph, which are em-
bedded into the eigenspace of the graph-Laplacian, a canon-
ical linear operator measuring local smoothness.
Incorporating analytic data structures and methods
within a deep learning framework has many advantages.
It yields better transparency and understanding of the net-
work, allows the use of classical ideas, which were thor-
oughly investigated and can lead to the design of new ar-
chitectures, grounded in solid theory. Spectral graph algo-rithms, however, are hard to incorporate directly in neural-
networks since they require eigenvalue computations which
cannot be integrated in back-propagation training algo-
rithms. Another major drawback of spectral graph tools is
their low scalability. It is not feasible to hold a large graph
containing millions of nodes and to compute its graph-
Laplacian eigenvectors. Moreover, updating the graph with
additional nodes is cumbersome, and one usually resorts to
graph-interpolation techniques, referred to as Out Of Sam-
ple Extension (OOSE) methods.
An approach to solve the above problems using deep
neural networks (DNNs), firstly suggested in [24] and re-
cently also in [9], is to train a network that approximates the
eigenspace by minimizing Rayleigh quotient type losses.
The core idea is that the Rayleigh quotient of a sum of n vectors is minimized by the n eigenvectors with the corresponding n smallest eigenvalues. As a result, given the features
of a data instance (node) as input, these networks generate
the respective coordinate in the spectral embedding space.
This space should be equivalent in some sense to the ana-
lytically calculated graph-Laplacian eigenvector space. A
common way to measure the equivalence of these spaces is
using the Grassman distance. Unfortunately, applying this
indirect approach does not guarantee convergence to the de-
sired eigenspace, and therefore the captured embedding might not be faithful.
An alternative approach, suggested in [18] for computing
the diffusion map embedding, is a direct supervised method.
The idea is to compute the embedding analytically, use it
as ground-truth and train the network to map features to
eigenspace coordinates in a supervised manner. In order to
compute the ground truth embedding, the authors used the
entire training set. This operation is very demanding com-
putationally in terms of both memory and time and is not
scalable when the training set is very large.
Our proposed method is to learn directly the eigenspace
in batches. We treat each batch as sampling of the full graph
and learn the eigenvector values in a supervised manner.
A major problem of this kind of scheme is the inconsistent mapping of features to eigenspace coordinates in different batches.
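The per-batch supervision targets implied by this scheme can be sketched as follows: build the affinity matrix of the sampled batch, form its graph Laplacian, and take the first k eigenvectors as regression targets for the network. The Gaussian affinity and the eigensolver call are assumptions for illustration; the alignment mechanism that resolves the batch-to-batch inconsistency is the paper's contribution and is not shown here.

```python
import torch

def batch_eigen_targets(X, k=5, sigma=1.0):
    """Compute spectral-embedding targets for one sampled batch.

    X: (n, d) features of the n nodes in the batch. Returns the k eigenvectors
    of the batch graph Laplacian with the smallest eigenvalues, used as
    supervised regression targets for the embedding network.
    """
    W = torch.exp(-torch.cdist(X, X) ** 2 / (2 * sigma ** 2))   # Gaussian affinities
    W.fill_diagonal_(0)                                         # no self-loops
    L = torch.diag(W.sum(dim=1)) - W                            # unnormalized Laplacian
    eigvals, eigvecs = torch.linalg.eigh(L)                     # ascending eigenvalues
    return eigvecs[:, :k]                                       # (n, k) targets

# Toy usage: targets for a random batch of 128 nodes; a network f(X) would then
# be trained with an MSE loss toward these (suitably aligned) targets.
X = torch.randn(128, 32)
Y_target = batch_eigen_targets(X)
```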
Speth_Non-Contrastive_Unsupervised_Learning_of_Physiological_Signals_From_Video_CVPR_2023 | Abstract
Subtle periodic signals such as blood volume pulse and
respiration can be extracted from RGB video, enabling non-
contact health monitoring at low cost. Advancements in
remote pulse estimation – or remote photoplethysmogra-
phy (rPPG) – are currently driven by deep learning solu-
tions. However, modern approaches are trained and evalu-
ated on benchmark datasets with ground truth from contact-
PPG sensors. We present the first non-contrastive unsuper-
vised learning framework for signal regression to mitigate
the need for labelled video data. With minimal assumptions
of periodicity and finite bandwidth, our approach discov-
ers the blood volume pulse directly from unlabelled videos.
We find that encouraging sparse power spectra within nor-
mal physiological bandlimits and variance over batches of
power spectra is sufficient for learning visual features of
periodic signals. We perform the first experiments utilizing
unlabelled video data not specifically created for rPPG to
train robust pulse rate estimators. Given the limited induc-
tive biases and impressive empirical results, the approach is
theoretically capable of discovering other periodic signals
from video, enabling multiple physiological measurements
without the need for ground truth signals.
| 1. Introduction
Camera-based vitals estimation is a rapidly growing field
enabling non-contact health monitoring in a variety of set-
tings [23]. Although many of the signals avoid detection
from the human eye, video data in the visible and infrared
ranges contain subtle intensity changes caused by physio-
logical oscillations such as blood volume and respiration.
Significant remote photoplethysmography (rPPG) research
for estimating the cardiac pulse has leveraged supervised
deep learning for robust signal extraction [7, 18, 30, 38, 51,
52]. While the number of successful approaches has rapidly
increased, the size of benchmark video datasets with simul-
taneous vitals recordings has remained relatively stagnant.
Robust deep learning-based systems for deployment re-
quire training on larger volumes of video data with di-verse skin tones, lighting, camera sensors, and movement.
However, collecting simultaneous video and physiologi-
cal ground truth with contact-PPG or electrocardiograms
(ECG) is challenging for several reasons. First, many hours
of high quality videos is an unwieldy volume of data. Sec-
ond, recording a diverse subject population in conditions
representative of real-world activities is difficult to conduct
in the lab setting. Finally, synchronizing contact measure-
ments with video is technically challenging, and even con-
tact measurements used for ground truth contain noise.
Fortunately, recent works find that contrastive unsuper-
vised learning for rPPG is a promising solution to the data
scarcity problem [13, 42, 45, 50]. We extend this research
line into non-contrastive unsupervised learning to discover
periodic signals in video data. With end-to-end unsuper-
vised learning, collecting more representative training data
to learn powerful visual features is much simpler, since only
video is required without associated medical information.
In this work, we show that non-contrastive unsupervised
learning is especially simple when regressing rPPG signals.
We find weak assumptions of periodicity are sufficient for
learning the minuscule visual features corresponding to the
blood volume pulse from unlabelled face videos. The loss
functions can be computed in the frequency domain over
batches without the need for pairwise or triplet compar-
isons. Figure 1 compares the proposed approach with su-
pervised and contrastive unsupervised learning approaches.
This work creates opportunities for scaling deep learning
models for camera-based vitals and estimating periodic or
quasi-periodic signals from unlabelled data beyond rPPG.
Our novel contributions are:
1. A general framework for physiological signal estima-
tion via non-contrastive unsupervised learning (SiNC)
by leveraging periodic signal priors.
2. The first non-contrastive unsupervised learning
method for camera-based vitals measurement.
3. The first experiments and results of training with non-
rPPG-specific video data without ground truth vitals.
Source code to replicate this work is available at https:
//github.com/CVRL/SiNC-rPPG .
Figure 1. Overview of the SiNC framework for rPPG compared with traditional supervised and unsupervised learning. Supervised and
contrastive losses use distance metrics to the ground truth or other samples. Our framework applies the loss directly to the prediction by
shaping the frequency spectrum, and encouraging variance over a batch of inputs. Power outside of the bandlimits is penalized to learn
invariances to irrelevant frequencies. Power within the bandlimits is encouraged to be sparsely distributed near the peak frequency.
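The three loss terms named in the caption operate on the power spectra of the predicted waveforms; a simplified sketch is given below. The bandlimits, the window around the peak, and the exact normalizations are assumptions for illustration and may differ from the definitions used in the paper.

```python
import torch

def sinc_losses(pred, fps=30.0, low_hz=0.66, high_hz=3.0, delta_hz=0.2):
    """Simplified bandwidth / sparsity / batch-variance losses on predicted waves.

    pred: (B, T) predicted rPPG waveforms for a batch of clips.
    """
    B, T = pred.shape
    spec = torch.fft.rfft(pred - pred.mean(dim=1, keepdim=True), dim=1).abs() ** 2
    spec = spec / (spec.sum(dim=1, keepdim=True) + 1e-8)        # normalized power spectrum
    freqs = torch.fft.rfftfreq(T, d=1.0 / fps)
    in_band = (freqs >= low_hz) & (freqs <= high_hz)

    # Bandwidth: penalize power outside the plausible pulse-rate band.
    l_bandwidth = spec[:, ~in_band].sum(dim=1).mean()

    # Sparsity: in-band power should concentrate near each clip's peak frequency.
    peak = freqs[in_band][spec[:, in_band].argmax(dim=1)]       # (B,)
    near_peak = (freqs[None, :] - peak[:, None]).abs() <= delta_hz
    l_sparsity = (spec * (~near_peak) * in_band).sum(dim=1).mean()

    # Variance: the batch-averaged in-band spectrum should spread over the band,
    # discouraging all clips from collapsing to the same frequency.
    band = spec[:, in_band]
    band = band / (band.sum(dim=1, keepdim=True) + 1e-8)
    l_variance = ((band.mean(dim=0) - 1.0 / band.shape[1]) ** 2).sum()

    return l_bandwidth, l_sparsity, l_variance

# Toy usage: 4 clips of 10 seconds at 30 fps.
waveforms = torch.randn(4, 300, requires_grad=True)
lb, ls, lv = sinc_losses(waveforms)
(lb + ls + lv).backward()
```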
|
Tian_Manipulating_Transfer_Learning_for_Property_Inference_CVPR_2023 | Abstract
Transfer learning is a popular method for tuning pre-
trained (upstream) models for different downstream tasks
using limited data and computational resources. We study
how an adversary with control over an upstream model used
in transfer learning can conduct property inference attacks
on a victim’s tuned downstream model. For example, to in-
fer the presence of images of a specific individual in the
downstream training set. We demonstrate attacks in which
an adversary can manipulate the upstream model to con-
duct highly e ffective and specific property inference attacks
(AUC score >0.9), without incurring significant perfor-
mance loss on the main task. The main idea of the ma-
nipulation is to make the upstream model generate acti-
vations (intermediate features) with di fferent distributions
for samples with and without a target property, thus en-
abling the adversary to distinguish easily between down-
stream models trained with and without training examples
that have the target property. Our code is available at
https:// github.com/ yulongt23/ Transfer-Inference .
| 1. Introduction
Transfer learning is a popular method for e fficiently
training deep learning models [6, 21, 33, 39, 42]. In a typ-
ical transfer learning scenario, an upstream trainer trains
and releases a pretrained model. Then a downstream trainer
will reuse the parameters of some layers of the released
upstream models to tune a downstream model for a par-
ticular task. This parameter reuse reduces the amount of
data and computing resources required for training down-
stream models significantly, making this technique increas-
ingly popular. However, the centralized nature of transfer
learning is open to exploitation by an adversary. Several
previous works have considered security risks associated
with transfer learning including backdoor attacks [39] and
misclassification attacks [33].
*Indicates the corresponding author.
We investigate the risk of property inference in the
context of transfer learning. In property inference (also
known as distribution inference ), the attacker aims to ex-
tract sensitive properties of the training distribution of a
model [3,7,12,29,41]. We consider a transfer learning sce-
nario where the upstream trainer is malicious and produces
a carefully crafted pretrained model with the goal of infer-
ring a particular property about the tuning data used by the
victim to train a downstream model. For example, the at-
tacker may be interested in knowing whether any images of
a specific individual (or group, such as seniors or Asians)
are contained in a downstream training set used to tune the
pre-trained model. Such inferences can lead to severe pri-
vacy leakage—for instance, if the adversary knows before-
hand that the downstream training set consists of data of pa-
tients that have a particular disease, confirming the presence
of a specific individual in that training data is a privacy vi-
olation. Property inference may also be used to audit mod-
els for fairness issues [22]—for example, in a downstream
dataset containing data of all the employees of an organi-
zation, finding the absence of samples of a certain group of
people (e.g., older people) may be evidence that those peo-
ple are underrepresented in that organization.
Contributions. We identify a new vulnerability of trans-
fer learning where the upstream trainer crafts a pretrained
model to enable an inference attack on the downstream
model that reveals very precise and accurate information
about the downstream training data (Section 3). We develop
methods to manipulate the upstream model training to pro-
duce a model that, when used to train a downstream model,
will induce a downstream model that reveals sensitive prop-
erties of its training data in both white-box and black-box
inference settings (Section 4). We demonstrate that this
substantially increases property inference risk compared to
baseline settings where the upstream model is trained nor-
mally (Section 7). Table 1 summarizes our key results. The
inference AUC scores are below 0.65 when the upstream
models are trained normally; after manipulation, the infer-
ences have AUC scores ≥0.89 even when only 0.1% (10
out of 10 000) of downstream samples have the target prop-
Downstream Task      Upstream Task                 Target Property        Normal upstream (0.1% / 1%)   Manipulated upstream (0.1% / 1%)
Gender Recognition   Face Recognition              Specific Individuals   0.49 / 0.52                   0.96 / 1.0
Smile Detection      ImageNet Classification [9]   Specific Individuals   0.50 / 0.50                   1.0 / 1.0
Age Prediction       ImageNet Classification [9]   Specific Individuals   0.54 / 0.63                   0.97 / 1.0
Smile Detection      ImageNet Classification [9]   Senior                 0.59 / 0.56                   0.89 / 1.0
Age Prediction       ImageNet Classification [9]   Asian                  0.49 / 0.65                   0.95 / 1.0
Table 1. Inference AUC scores for different percentages of samples with the target property. Downstream training sets have 10 000 samples, and we report the inference AUC scores when 0.1% (10) and 1% (100) of the samples in the downstream set have the target property. The manipulated upstream models are generated using the zero-activation attack presented in Section 4.
erty and achieve perfect results (AUC score =1.0) when the
ratio increases to 1%. The manipulated models have negli-
gible performance drops ( <0.9%) on their intended tasks.
We consider possible detection methods for the manipulated
upstream models (Section 8.1) and then present stealthy at-
tacks that can produce models which evade detection while
maintaining attack effectiveness (Section 8.2).
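The table caption mentions a zero-activation attack; the sketch below is only a caricature of the general idea from the abstract, namely training the upstream model with an extra loss so that a chosen subset of feature channels behaves differently for samples with and without the target property. The channel subset, the margin, and the loss form are assumptions for illustration and are not the method described in Section 4.

```python
import torch
import torch.nn.functional as F

def manipulation_loss(features, has_property, secret_channels, margin=1.0):
    """Illustrative auxiliary loss for a malicious upstream trainer.

    features: (B, D) penultimate activations of the upstream model.
    has_property: (B,) boolean mask, True for samples with the target property.
    The loss pushes the chosen channels toward zero for target-property samples
    and above a margin otherwise, so a downstream model tuned on top of these
    features inherits a detectable difference.
    """
    secret = features[:, secret_channels].abs().mean(dim=1)          # (B,)
    loss = torch.zeros((), dtype=features.dtype, device=features.device)
    if has_property.any():
        loss = loss + secret[has_property].mean()                    # drive toward zero
    if (~has_property).any():
        loss = loss + F.relu(margin - secret[~has_property]).mean()  # keep away from zero
    return loss

# Toy usage: the attacker would add this term to the normal upstream task loss.
feats = torch.randn(16, 512, requires_grad=True)
flags = torch.rand(16) < 0.25
aux = manipulation_loss(feats, flags, secret_channels=list(range(8)))
aux.backward()
```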
|
Song_EcoTTA_Memory-Efficient_Continual_Test-Time_Adaptation_via_Self-Distilled_Regularization_CVPR_2023 | Abstract
This paper presents a simple yet effective approach that
improves continual test-time adaptation (TTA) in a memory-
efficient manner. TTA may primarily be conducted on edge
devices with limited memory, so reducing memory is cru-
cial but has been overlooked in previous TTA studies. In
addition, long-term adaptation often leads to catastrophic
forgetting and error accumulation, which hinders apply-
ing TTA in real-world deployments. Our approach con-
sists of two components to address these issues. First, we
present lightweight meta networks that can adapt the frozen
original networks to the target domain. This novel archi-
tecture minimizes memory consumption by decreasing the
size of intermediate activations required for backpropaga-
tion. Second, our novel self-distilled regularization controls
the output of the meta networks not to deviate significantly
from the output of the frozen original networks, thereby
preserving well-trained knowledge from the source domain.
Without additional memory, this regularization prevents er-
ror accumulation and catastrophic forgetting, resulting in
stable performance even in long-term test-time adaptation.
We demonstrate that our simple yet effective strategy out-
performs other state-of-the-art methods on various bench-
marks for image classification and semantic segmentation
tasks. Notably, our proposed method with ResNet-50 and
WideResNet-40 takes 86% and 80% less memory than the
recent state-of-the-art method, CoTTA.
| 1. Introduction
Despite recent advances in deep learning [14, 21, 20, 19],
deep neural networks often suffer from performance degra-
dation when the source and target domains differ signifi-
cantly [8, 36, 31]. Among several tasks addressing such
domain shifts, test-time adaptation (TTA) has recently re-
ceived a significant amount of attention due to its practi-
cality and wide applicability especially in on-device set-
*Work done during an internship at Qualcomm AI Research.
†Corresponding author.‡Qualcomm AI Research is an initiative of
Qualcomm Technologies, Inc.
Figure 1. (a) Memory cost comparison between TTA methods. The size of activations, not the parameters, is the primary memory bottleneck during training. (b) CIFAR-C adaptation performance. We perform continual online adaptation on the CIFAR-C dataset. The x- and y-axis are the average error over all corruptions and the total memory consumption including the parameters and activations, respectively. Our approach, EcoTTA, achieves the best results while consuming the least amount of memory, where K is the model partition factor used in our method.
tings [53, 35, 26, 15]. This task focuses on adapting the
model to unlabeled online data from the target domain with-
out access to the source data.
While existing TTA methods show improved TTA per-
formances, minimizing the sizes of memory resources have
been relatively under-explored, which is crucial considering
the applicability of TTA in on-device settings. For example,
several studies [54, 35, 9] update entire model parameters
Figure 2. Architecture for test-time adaptation. We illustrate TTA methods: TENT [53], EATA [41], CoTTA [54], and Ours (EcoTTA). TENT and EATA update multiple batch norm layers, in which large activations have to be stored for gradient calculation. In CoTTA, an entire network is trained with additional strategies for continual adaptation that require a significant amount of both memory and time. In contrast, our approach requires only a minimal amount of stored activations by updating only a few layers. Also, stable long-term adaptation is enabled by our proposed regularization, named self-distilled regularization.
to achieve large performance improvements, which may be
impractical when the available memory sizes are limited.
Meanwhile, several TTA approaches update only the batch
normalization (BN) parameters [53, 41, 16] to make the
optimization efficient and stable However, even updating
only BN parameters is not memory efficient enough since
the amount of memory required for training models signifi-
cantly depends on the size of intermediate activations rather
than the learnable parameters [4, 13, 57]. Throughout the
paper, activations refer to the intermediate features stored
during the forward propagation, which are used for gradi-
ent calculations during backpropagation. Fig. 1 (a) demon-
strates such an issue.
Moreover, a non-trivial number of TTA studies assume
a stationary target domain [53, 35, 9, 48], but the target do-
main may continuously change in the real world ( e.g., con-
tinuous changes in weather conditions, illuminations, and
location [8] in autonomous driving). Therefore, it is nec-
essary to consider long-term TTA in an environment where
the target domain constantly varies. However, there exist
two challenging issues: 1) catastrophic forgetting [54, 41]
and 2) error accumulation. Catastrophic forgetting refers to
degraded performance on the source domain due to long-
term adaptation to target domains [54, 41]. Such an issue
is important since the test samples in the real world may
come from diverse domains, including the source and tar-
get domains [41]. Also, since target labels are unavailable,
TTA relies on noisy unsupervised losses, such as entropy
minimization [17], so long-term continual TTA may lead to
error accumulation [63, 2].
To address these challenges, we propose memory-
Efficient continual Test-TimeAdaptation (EcoTTA), a sim-
ple yet effective approach for 1) enhancing memory effi-
ciency and 2) preventing catastrophic forgetting and error
accumulation. First, we present a memory-efficient archi-
tecture consisting of frozen original networks and our pro-
posed meta networks attached to the original ones. During
the test time, we freeze the original networks to discard the
intermediate activations that occupy a significant amount of
memory. Instead, we only adapt lightweight meta networksto the target domain, composed of only one batch normal-
ization and one convolution block. Surprisingly, updating
only the meta networks, not the original ones, can result in
significant performance improvement as well as consider-
able memory savings. Moreover, we propose a self-distilled
regularization method to prevent catastrophic forgetting and
error accumulation. Our regularization leverages the pre-
served source knowledge distilled from the frozen original
networks to regularize the meta networks. Specifically, we
control the output of the meta networks not to deviate from
the one extracted by the original networks significantly. No-
tably, our regularization leads to negligible overhead be-
cause it requires no extra memory and is performed in par-
allel with adaptation loss, such as entropy minimization.
Recent TTA studies require access to the source data be-
fore model deployments [35, 9, 28, 1, 33, 41]. Similarly, our
method uses the source data to warm up the newly attached
meta networks for a small number of epochs before model
deployment. If the source dataset is publicly available or
the owner of the pre-trained model tries to adapt the model
to a target domain, access to the source data is feasible [9].
Here, we emphasize that pre-trained original networks are
frozen throughout our process, and our method is applicable
to any pre-trained model because it is agnostic to the archi-
tecture and pre-training method of the original networks.
Our paper presents the following contributions:
• We present novel meta networks that help the frozen
original networks adapt to the target domain. This
architecture significantly minimize memory consump-
tion up to 86% by reducing the activation sizes of the
original networks.
• We propose a self-distilled regularization that controls
the output of meta networks by leveraging the output of
frozen original networks to preserve the source knowl-
edge and prevent error accumulation.
• We improve both memory efficiency and TTA perfor-
mance compared to existing state-of-the-art methods
on 1) image classification task ( e.g., CIFAR10/100-C
and ImageNet-C) and 2) semantic segmentation task
(e.g., Cityscapes with weather corruption).
|
Tang_Intrinsic_Physical_Concepts_Discovery_With_Object-Centric_Predictive_Models_CVPR_2023 | Abstract
The ability to discover abstract physical concepts and
understand how they work in the world through observing
lies at the core of human intelligence. The acquisition of this
ability is based on compositionally perceiving the environ-
ment in terms of objects and relations in an unsupervised
manner. Recent approaches learn object-centric represen-
tations and capture visually observable concepts of objects,
e.g., shape, size, and location. In this paper, we take a step
forward and try to discover and represent intrinsic physical
concepts such as mass and charge. We introduce the PHYsi-
cal Concepts Inference NEtwork (PHYCINE), a system that
infers physical concepts in different abstract levels with-
out supervision. The key insights underlining PHYCINE
are two-fold, commonsense knowledge emerges with pre-
diction, and physical concepts of different abstract levels
should be reasoned in a bottom-up fashion. Empirical eval-
uation demonstrates that variables inferred by our system
work in accordance with the properties of the correspond-
ing physical concepts. We also show that object representa-
tions containing the discovered physical concepts variables
could help achieve better performance in causal reasoning
tasks, i.e., ComPhy.
| 1. Introduction
Why do objects bounce off after the collision? Why do
magnets attract or repel each other? Objects cover many
complex physical concepts which define how they interact
with the world [5]. Humans have the ability to discover
abstract concepts about how the world works through just
observation. In a preserved video, some concepts are obvi-
ous in the visual appearances of objects, like location, size,
velocity, etc, while some concepts are hidden in the behav-
iors of objects. For example, intrinsic physical concepts
*Corresponding author .⋆Equal contribution
Figure 1. Given a video from the ComPhy [5] dataset, PHYCINE
decomposes the scene into multi-object representations that con-
tain physical concepts of different abstraction levels, including vi-
sual attributes, dynamics, mass, and charge. (shown in different
colors).
like mass and charge, are unobservable from static scenes
and can only be discovered from the object dynamics. Ob-
jects carrying the same or opposite charge will exert a re-
pulsive or attractive force on each other. After a collision,
the lighter object will undergo larger changes in its motion
compared with the massive one. How can machines learn
to reveal and represent such common sense knowledge?
We believe the answer lies in two key steps: decomposing
the world into object-centric representations, and building
a predictive model with abstract physical concepts as latent
variables to handle uncertainty.
Object-centric representation learning [3,9,18,22,23,29]
aims at perceiving the world in a structured manner to im-
prove the generalization ability of intelligent systems and
achieve higher-level cognition. VAE [13]-based models,
like IODINE [9] learn disentangled features that separate
interpretable object properties (e.g., shape, color, location)
in a common format. However, abstract concepts like mass
and charge, can not be distilled by generative models since
building object-level representations does not necessarily
model these higher-level-abstracted physics. CPL [5] suc-
cessfully learns high-level concepts with graph networks,
but it relies on supervision signals from the ground-truth
object-level concept labels. There are also several studies
investigating the effectiveness of object-centric representa-
tions in learning predictive models to solve predicting and
planning tasks [11, 14, 23, 24]. Nevertheless, to our knowl-
edge, there is no work yet trying to discover and represent
object-level intrinsic physical concepts in an unsupervised
manner.
The idea that abstract concepts can be learned through
prediction has been formulated in various ways in cognitive
science, neuroscience, and AI over several decades [19,20].
Intuitively, physics concepts such as velocity, mass, and
charge, may emerge gradually by training the system to
perform long-term predictions at the object representation
level [16]. Through predictions at increasingly long-time
scales, more and more complex concepts about how the
world works may be acquired in a bottom-up fashion. In
this paper, we focus on the main challenge: enabling the
model to represent and disentangle the unfolded concepts.
We follow common sense: with a neural physics engine,
if the prediction of an object trajectory fails, there must be
physical concepts that have not been captured. Therefore,
a latent variable that successfully models the uncertainty
of prediction defines a new physical concept, and a better
physical engine can be built. Following this idea, we cate-
gorize physical concepts into three levels of abstraction: ex-
trinsic concepts, dynamic concepts, and intrinsic concepts.
Firstly, the extrinsic properties (e.g., color, shape, material,
size, depth, location) can be referred to as object contexts
belonging to the lowest level of abstraction, and a percep-
tion module can directly encode the contexts. Secondly, the
dynamic properties (e.g., velocity) in the middle level are
hidden in the temporal and spatial relationships of visual
features and should be inferred from short-term prediction.
Thirdly, intrinsic properties like mass and charge can nei-
ther be directly observed nor inferred from short-term pre-
diction. They can only be inferred by analyzing the way
how objects exert force on each other. For example, infer-
ring mass needs the incorporation of a collision event, and
inferring charge needs to observe the change of object dy-
namics, which depends on a long-term observation or pre-
diction.
In this work, we build a system called PHYsical Con-
cepts Inference NEtwork (PHYCINE). In the system, there
are features arranged in a bottom-up pyramid that represents
physical concepts in different abstraction levels. These fea-tures cooperatively perform reconstruction and prediction
to interpret the observed world. Firstly, the object context
features reconstruct the observed image with a generative
model. Secondly, object dynamics features predict the next-
step object contexts by learning a state transition function.
Finally, the mass and charge features model the interaction
between objects. PHYCINE uses a relation module to cal-
culate pair-wise forces for all entities, and adaptively learn
the variables that represent object mass and charge with
proper regularization. During training, all representations
are randomly initialized and iteratively refined using gradi-
ent information about the evidence lower bound (ELBO)
obtained by the current estimate of the parameters. The
model can be trained using only raw input video data in
an end-to-end fashion. As shown in Figure 1, taking a raw
video as input, PHYCINE not only extracts extrinsic ob-
ject contexts (i.e., size, shape, color, location, material), but
also infers more abstract concepts (i.e., object dynamics,
mass, and charge). In our experiments, we demonstrate the
model’s ability to discover and disentangle physical con-
cepts, which can be used to solve downstream tasks. We
evaluate the learned representation on ComPhy, a causal
reasoning benchmark.
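As a rough illustration of how these three levels of prediction could be combined, the sketch below assembles a reconstruction loss from object contexts, a short-term prediction loss from dynamics, and a longer-term loss in which pairwise forces derived from mass and charge adjust the dynamics; the loss forms, the additive force correction, and all shapes are assumptions, and the actual system refines its latent variables iteratively with ELBO gradients rather than in a single pass.
```python
import torch.nn.functional as F

def physical_concept_losses(frames, contexts, dynamics, mass, charge,
                            decoder, transition, relation):
    """frames: (T, ...) video; contexts/dynamics: per-step object latents;
    mass/charge: per-object latents; decoder/transition/relation: modules."""
    T = frames.shape[0]
    recon = sum(F.mse_loss(decoder(contexts[t]), frames[t]) for t in range(T))
    # Short-term prediction: dynamics should explain the next-step contexts.
    short = sum(F.mse_loss(transition(contexts[t], dynamics[t]), contexts[t + 1])
                for t in range(T - 1))
    # Longer-term prediction: pairwise forces from mass/charge correct dynamics.
    longer = sum(F.mse_loss(
        transition(contexts[t], dynamics[t] + relation(contexts[t], mass, charge)),
        contexts[t + 1]) for t in range(T - 1))
    return recon + short + longer
```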
Our main contributions are as follows: (i) We challenge
a problem of physical concept discovery by observing raw
videos and successfully discovering intrinsic concepts of
velocity, mass, and charge. (ii) We introduce a framework
PHYCINE, a hierarchical object-centric predictive model
that infers physical concepts from low (e.g., color, shape)
to high (e.g., mass, charge) abstract levels, leading to dis-
entangled object-level physical representations. (iii) We
demonstrate the effectiveness of the representation learned
by PHYCINE on ComPhy.
|
Sun_MOSO_Decomposing_MOtion_Scene_and_Object_for_Video_Prediction_CVPR_2023 | Abstract
Motion, scene and object are three primary visual com-
ponents of a video. In particular, objects represent the fore-
ground, scenes represent the background, and motion traces
their dynamics. Based on this insight, we propose a two-
stage MOtion, Scene and Object decomposition framework
(MOSO)1for video prediction, consisting of MOSO-VQVAE
and MOSO-Transformer. In the first stage, MOSO-VQVAE
decomposes a previous video clip into the motion, scene and
object components, and represents them as distinct groups
of discrete tokens. Then, in the second stage, MOSO-
Transformer predicts the object and scene tokens of the sub-
sequent video clip based on the previous tokens and adds
dynamic motion at the token level to the generated object
and scene tokens. Our framework can be easily extended
to unconditional video generation and video frame inter-
polation tasks. Experimental results demonstrate that our
method achieves new state-of-the-art performance on five
challenging benchmarks for video prediction and uncondi-
tional video generation: BAIR, RoboNet, KTH, KITTI and
UCF101. In addition, MOSO can produce realistic videos
by combining objects and scenes from different videos.
| 1. Introduction
Video prediction aims to generate future video frames
based on a past video without any additional annotations
[6, 18], which is important for video perception systems,
such as autonomous driving [25], robotic navigation [16]
and decision making in daily life [5], etc. Considering
that video is a spatio-temporal record of moving objects, an
ideal solution of video prediction should depict visual con-
tent in the spatial domain accurately and predict motions in
the temporal domain reasonably. However, easily distorted
object identities and infinite possibilities of motion trajecto-
* Corresponding Author
1Codes have been released in https://github.com/iva-mzsun/MOSO
Figure 1. Rebuilding video signals based on (a) traditional decom-
posed content and motion signals or (b) our decomposed scene,
object and motion signals. Decomposing content and motion sig-
nals causes blurred and distorted appearance of the wrestling man,
while further separating objects from scenes resolves this issue.
ries make video prediction a challenging task.
Recently, several works [15, 44] propose to decompose
video signals into content and motion, with content encod-
ing the static parts, i.e., scene and object identities, and mo-
tion encoding the dynamic parts, i.e., visual changes. This
decomposition allows two specific encoders to be devel-
oped, one for storing static content signals and the other
for simulating dynamic motion signals. However, these
methods do not distinguish between foreground objects and
background scenes, which usually have distinct motion pat-
terns. Motions of scenes can be caused by camera move-
ments or environment changes, e.g., a breeze, whereas mo-
tions of objects such as jogging are always more local and
routine. When scenes and objects are treated as a unity,
their motion patterns cannot be handled in a distinct man-
ner, resulting in blurry and distorted visual appearances. As
depicted in Fig. 1, it is obvious that the moving subject
(i.e., the wrestling man) is more clear in the video obtained
by separating objects from scenes than that by treating them
as a single entity traditionally.
Based on the above insight, we propose a two-stage MO-
tion, Scene and Object decomposition framework (MOSO)
for video prediction. We distinguish objects from scenes
and utilize motion signals to guide their integration. In the
first stage, MOSO-VQV AE is developed to learn motion,
scene and object decomposition encoding and video decod-
ing in a self-supervised manner. Each decomposed com-
ponent is equipped with an independent encoder to learn
its features and to produce a distinct group of discrete to-
kens. To deal with different motion patterns, we integrate
the object and scene features under the guidance of the cor-
responding motion feature. Then the video details can be
decoded and rebuilt from the merged features. In particular,
the decoding process is devised to be time-independent, so
that a decomposed component or a single video frame can
be decoded for flexible visualization.
In the second stage, MOSO-Transformer is proposed to
generate a subsequent video clip based on a previous video
clip. Motivated by the production of animation, which first
determines character identities and then portrays a series of
actions, MOSO-Transformer firstly predicts the object and
scene tokens of the subsequent video clip from those of the
previous video clip. Then the motion tokens of the subse-
quent video clip are generated based on the predicted scene
and object tokens and the motion tokens of the previous
video clip. The predicted object, scene, and motion tokens
can be decoded to the subsequent video clip using MOSO-
VQV AE. By modeling video prediction at the token level,
MOSO-Transformer is relieved from the burden of model-
ing millions of pixels and can instead focus on capturing
global context relationships. In addition, our framework can
be easily extended to other video generation tasks, includ-
ing unconditional video generation and video frame inter-
polation tasks, by simply revising the training or generation
pipelines of MOSO-Transformer.
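The resulting token-level pipeline can be summarized with the following sketch; the method names on the vqvae and transformer objects are placeholders for the corresponding stages, not the released API.
```python
def predict_next_clip(prev_clip, vqvae, transformer):
    # Stage 1: decompose the previous clip into discrete token groups.
    motion_tok, scene_tok, obj_tok = vqvae.encode(prev_clip)

    # Stage 2a: identities first -- scene and object tokens of the next clip.
    next_scene_tok, next_obj_tok = transformer.predict_identity(scene_tok, obj_tok)

    # Stage 2b: motion tokens conditioned on the predicted identities and the
    # previous motion tokens.
    next_motion_tok = transformer.predict_motion(next_scene_tok, next_obj_tok,
                                                 motion_tok)

    # Decode back to pixels; decoding is time-independent, so single frames or
    # individual components could be decoded as well.
    return vqvae.decode(next_motion_tok, next_scene_tok, next_obj_tok)
```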
Our contributions are summarized as follows:
•We propose a novel two-stage framework MOSO for
video prediction, which could decompose videos into mo-
tion, scene and object components and conduct video pre-
diction at the token level.
•MOSO-VQV AE is proposed to learn motion, scene
and object decomposition encoding and time-independently
video decoding in a self-supervised manner, which allows
video manipulation and flexible video decoding.
•MOSO-Transformer is proposed to first determine the
scene and object identities of subsequent video clips and
then predict subsequent motions at the token level.
•Qualitative and quantitative experiments on five chal-
lenging benchmarks of video prediction and unconditional
video generation demonstrate that our proposed method
achieves new state-of-the-art performance. |
Trosten_On_the_Effects_of_Self-Supervision_and_Contrastive_Alignment_in_Deep_CVPR_2023 | Abstract
Self-supervised learning is a central component in re-
cent approaches to deep multi-view clustering (MVC). How-
ever, we find large variations in the development of self-
supervision-based methods for deep MVC, potentially slow-
ing the progress of the field. To address this, we present Deep-
MVC, a unified framework for deep MVC that includes many
recent methods as instances. We leverage our framework to
make key observations about the effect of self-supervision,
and in particular, drawbacks of aligning representations
with contrastive learning. Further, we prove that contrastive
alignment can negatively influence cluster separability, and
that this effect becomes worse when the number of views in-
creases. Motivated by our findings, we develop several new
DeepMVC instances with new forms of self-supervision. We
conduct extensive experiments and find that (i) in line with
our theoretical findings, contrastive alignments decreases
performance on datasets with many views; (ii) all methods
benefit from some form of self-supervision; and (iii) our new
instances outperform previous methods on several datasets.
Based on our results, we suggest several promising direc-
tions for future research. To enhance the openness of the
field, we provide an open-source implementation of Deep-
MVC, including recent models and our new instances. Our
implementation includes a consistent evaluation protocol,
facilitating fair and accurate evaluation of methods and
components1.
| 1. Introduction
Multi-view clustering (MVC) generalizes the cluster-
ing task to data where the instances to be clustered are
observed through multiple views, or by multiple modali-
*UiT Machine Learning group ( machine-learning.uit.no ) and
Visual Intelligence Centre ( visual-intelligence.no ).
†Norwegian Computing Center ( nr.no ).
‡Department of Computer Science, University of Copenhagen.
§Pioneer Centre for AI ( aicentre.dk ).
1Code: https://github.com/DanielTrosten/DeepMVC
Figure 1. Overview of the DeepMVC framework for a two-view
dataset. Different colors denote different components. The frame-
work is generalizable to an arbitrary number of views by adding
more view specific encoders ( f) and SV-SSL blocks.
ties. In recent years, deep learning architectures have seen
widespread adoption in MVC, resulting in the deep MVC
subfield. Methods developed within this subfield have shown
state-of-the-art clustering performance on several multi-view
datasets [ 14,19–21,29,33], largely outperforming tradi-
tional, non-deep-learning-based methods [33].
Despite these promising developments, we identify sig-
nificant drawbacks with the current state of the field. Self-
supervised learning (SSL) is a crucial component in many
recent methods for deep MVC [ 14,19–21,29,33]. However,
the large number of methods, all with unique components
and arguments about how they work, makes it challenging
to identify clear directions and trends in the development
of new components and methods. Methodological research
in deep MVC thus lacks foundation and consistent direc-
tions for future advancements. This effect is amplified by
large variations in implementation and evaluation of new
methods. Architectures, data preprocessing and data splits,
hyperparameter search strategies, evaluation metrics, and
model selection strategies all vary greatly across publica-
tions, making it difficult to properly compare methods from
different papers. To address these challenges, we present a
unified framework for deep MVC, coupled with a rigorous
and consistent evaluation protocol, and an open-source im-
plementation. Our main contributions are summarized as
follows:
(1) DeepMVC framework. Despite the variations in the
development of new methods, we recognize that the major-
ity of recent methods for deep MVC can be decomposed
into the following fixed set of components: (i) view-specific
encoders; (ii) single-view SSL; (iii) multi-view SSL; (iv) fu-
sion; and (v) clustering module. The DeepMVC framework
(Figure 1) is obtained by organizing these components into a
unified deep MVC model. Methods from previous work can
thus be regarded as instances of DeepMVC.
(2) Theor |
Tang_Unifying_Vision_Text_and_Layout_for_Universal_Document_Processing_CVPR_2023 | Abstract
We propose Universal Document Processing (UDOP),
a foundation Document AI model which unifies text, im-
age, and layout modalities together with varied task for-
mats, including document understanding and generation.
UDOP leverages the spatial correlation between textual con-
tent and document image to model image, text, and layout
modalities with one uniform representation. With a novel
Vision-Text-Layout Transformer, UDOP unifies pretraining
and multi-domain downstream tasks into a prompt-based
sequence generation scheme. UDOP is pretrained on both
large-scale unlabeled document corpora using innovative
self-supervised objectives and diverse labeled data. UDOP
also learns to generate document images from text and lay-
out modalities via masked image reconstruction. To the
best of our knowledge, this is the first time in the field of
document AI that one model simultaneously achieves high-
quality neural document editing and content customization.
Our method sets the state-of-the-art on 8 Document AI tasks,
e.g., document understanding and QA, across diverse data
domains like finance reports, academic papers, and web-
sites. UDOP ranks first on the leaderboard of the Document
Understanding Benchmark.1
| 1. Introduction
Document Artificial Intelligence studies information ex-
traction, understanding, and analysis of digital documents,
e.g., business invoices, tax forms, academic papers, etc. It is
a multimodal task where text is structurally embedded in doc-
uments, together with other vision information like symbols,
figures, and style. Different from classic vision-language
research, document data have a 2D spatial layout: text con-
tent is structurally spread around in different locations based
on diverse document types and formats (e.g., invoices vs.
*Corresp. authors: [email protected], [email protected]
1Code and models: https://github.com/microsoft/i-Code/tree/main/i-Code-Doc
tax forms); formatted data such as figures, tables and plots
are laid out across the document. Hence, effectively and
efficiently modeling and understanding the layout is vital
for document information extraction and content understand-
ing, for example, title/signature extraction, fraudulent check
detection, table processing, document classification, and
automatic data entry from documents.
Document AI has unique challenges that set it apart from
other vision-language domains. For instance, the cross-
modal interactions between text and visual modalities are
much stronger here than in regular vision-language data,
because the text modality is visually-situated in an image.
Moreover, downstream tasks are diverse in domains and
paradigms, e.g., document question answering [ 45], lay-
out detection [ 57], classification [ 13], information extrac-
tion [ 28], etc. This gives rises to two challenges: (1) how
to utilize the strong correlation between image, text and lay-
out modalities and unify them to model the document as a
whole? (2) how can the model efficiently and effectively
learn diverse vision, text, and layout tasks across different
domains?
There has been remarkable progress in Document AI in
recent years [ 1,10–12,15,16,24,26,29,30,36,37,48,52–55].
Most of these model paradigms are similar to traditional
vision-language frameworks: one line of work [ 1,11,29,30,
36,37,52–55] inherits vision-language models that encode
images with a vision network (e.g., vision transformer) and
feed the encodings to the multimodal encoder along with
text [ 17,27,44,47]; another line of work uses one joint en-
coder [ 22,46] for both text and image [ 16]. Some models
regard documents as text-only inputs [ 10,12,15,26,48]. In
these works, the layout modality is represented as shallow
positional embeddings, e.g., adding a 2D positional embed-
ding to text embeddings. The strong correlation between
modalities inherent in document data are not fully exploited.
Also to perform different tasks, many models have to use
task-specific heads, which is inefficient and requires manual
design for each task.
To address these challenges, we propose Universal Docu-
Figure 1. UDOP unifies vision, text, and layout through vision-text-layout Transformer and unified generative pretraining tasks including
vision task, text task, layout task, and mixed task. We show the task prompts (left) and task targets (right) for all self-supervised objectives
(joint text-layout reconstruction, visual text recognition, layout modeling, and masked autoencoding) and two example supervised objectives
(question answering and layout analysis).
ment Processing (UDOP), a foundation Document AI model
that unifies vision, text, and layout and different document
tasks. Different from regarding image and document text as
two separate inputs in previous works, in UDOP we propose
to model them with the uniform layout-induced representa-
tion (Sec. 3.1): in the input stage, we add embeddings of
text tokens with the features of the image patch where the
tokens are located. This simple and novel layout-induced
representation greatly enhances the interaction between the
text and vision modalities.
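A simplified sketch of this fusion step is shown below; the assignment of each token to the patch containing its box center, the normalized-coordinate convention, and the function name are assumptions, and details may differ from the formulation in Sec. 3.1.
```python
def layout_induced_embeddings(text_emb, token_boxes, patch_feats, grid_hw):
    """text_emb: (N, D) token embeddings; token_boxes: (N, 4) normalized
    (x0, y0, x1, y1) boxes in [0, 1]; patch_feats: (H*W, D) image-patch
    features; grid_hw: (H, W) patch grid size."""
    H, W = grid_hw
    cx = (token_boxes[:, 0] + token_boxes[:, 2]) / 2           # token center x
    cy = (token_boxes[:, 1] + token_boxes[:, 3]) / 2           # token center y
    col = (cx * W).long().clamp(0, W - 1)
    row = (cy * H).long().clamp(0, H - 1)
    patch_idx = row * W + col                                  # patch containing the token
    return text_emb + patch_feats[patch_idx]                   # joint text-vision embedding
```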
Besides the layout-induced representation, to form a uni-
form paradigm for different vision, text, layout tasks, UDOP
first builds a homogeneous vocabulary for texts and docu-
ment layout that converts layout, i.e. bounding boxes, to
discretized tokens. Second, we propose Vision-Text-Layout
(VTL) Transformer, consisting of a modality-agnostic en-
coder, text-layout decoder and vision decoder. VTL Trans-
former allows UDOP to jointly encode and decode vision,
text, and layout. UDOP unites all downstream tasks with a
sequence-to-sequence generation framework.
Besides the challenges of modality unification and task
paradigms, another issue is previous works utilized self-
supervised learning objectives that were originally designed
for single-modality learning, e.g., masked language model-
ing, or classical vision-language pretraining, e.g., contrastive
learning. We instead propose novel self-supervised learning
objectives designed to allow holistic document learning, in-
cluding layout modeling, text and layout reconstruction, and
vision recognition that account for text, vision and layout
modeling together (Sec. 4). Besides sequential generation,
UDOP can also generate vision documents by leveraging
masked autoencoders (MAE) [ 14] by reconstructing the doc-
ument image from text and layout modalities. With such
generation capacity, UDOP is the first document AI model
to achieve high-quality customizable, joint document editing
and generation. Finally, our uniform sequence-to-sequence generation
framework enables us to conveniently incorporate all major
document supervised learning tasks to pretraining, i.e., docu-
ment layout analysis, information extraction, document clas-
sification, document Q&A, and Table QA/NLI, despite their
significant differences in task and data format. In contrast,
pretraining in previous document AI works is constrained
to unlabeled data only (or using one single auxiliary super-
vised dataset such as FUNSD [ 55]), while abundant labeled
datasets with high quality supervision signals are ignored
due to the lack of modeling flexibility. Overall, UDOP is
pretrained on 11M public unlabeled documents, together
with 11 supervised datasets of 1.8M examples. Ablation
study in Table 4 shows that UDOP pretrained with only the
proposed self-supervised objectives exhibits great improve-
ments over previous models, and adding the supervised data
to pretraining further improves the performance.
We evaluate UDOP on FUNSD [ 18], CORD [ 34], RVL-
CDIP [ 13], DocVQA [ 33], and DUE-Benchmark [ 2]. UDOP
ranks the 1st place on the DUE-Benchmark leaderboard with
7 tasks, and also achieves SOTA on CORD, hence making
UDOP a powerful and unified foundation Document AI
model for diverse document understanding tasks.
To summarize, our major contributions include:
1. Unified representations and modeling for vision, text
and layout modalities in document AI.
2. Unified all document tasks to the sequence-to-sequence
generation framework.
3. Combined novel self-supervised objectives with super-
vised datasets in pretraining for unified document pretrain-
ing.
4. UDOP can process and generate text, vision, and layout
modalities together, which to the best of our knowledge is
first one in the field of document AI.
5. UDOP is a foundation model for Document AI, achiev-
ing SOTA on 8 tasks with significant margins.
|
Tao_Boosting_Transductive_Few-Shot_Fine-Tuning_With_Margin-Based_Uncertainty_Weighting_and_Probability_CVPR_2023 | Abstract
Few-Shot Learning (FSL) has been rapidly developed
in recent years, potentially eliminating the requirement for significant data acquisition. Few-shot fine-tuning has been demonstrated to be practically efficient and helpful, especially for out-of-distribution datum [7, 13, 17, 29]. In this
work, we first observe that the few-shot fine-tuned methods are learned with the imbalanced class marginal distribution, leading to imbalanced per-class testing accuracy. This observation further motivates us to propose the Transductive Fine-tuning with Margin-based uncertainty weighting and Probability regularization (TF-MP), which learns a more balanced class marginal distribution as shown in Fig. 1. We first conduct sample weighting on unlabeled
testing data with margin-based uncertainty scores and further regularize each test sample's categorical probability. TF-MP achieves state-of-the-art performance on in- / out-of-distribution evaluations of Meta-Dataset [31] and sur-
passes previous transductive methods by a large margin.
| 1. Introduction
Deep learning has gained vital progress in various ar-
chitecture designs, optimization techniques, data augmenta-tion, and learning strategies, demonstrating its great poten-tial to be applied to real-world scenarios. However, applica-tions with deep learning generally require a large amount oflabeled data, which is time-consuming to collect and costly
on manual labeling force. Few-Shot Learning (FSL), learn-
ing with only a few training samples, becomes increasinglyessential [ 5,9,10,27,31,33] to alleviate the dependence on
data acquisition significantly.
The recent attention on FSL over out-of-distribution da-
tum [ 31] poses a challenge in obtaining efficient algorithms
that can perform well on cross-domain situations. Fine-tuning a pre-trained feature extractor with a few samples[5,7,13,17,29] recently demonstrates its prominent poten-
tial to solve this challenge. However, as illustrated in [29],
Figure 1. We observe that fine-tuned models with current state-of-
the-art methods [ 7,17,18,29] learned an imbalanced class marginal
distribution. In the empirical experiments, a uniform testing set isutilized, and the Largest Difference LDbetween per-class predic-
tions is used to quantify whether the learned class marginal proba-bility is balanced. Data are from sub-datasets in Meta-Dataset [ 31]
with 100 episodes for each dataset and 10 per-class testing sam-ples. With current methods, LDis over 10. TF-MP successfully
reduces LDby around 5 points and achieves the best per-class ac-
curacy.
a few training samples would lead to a biased estimation of
the true data distribution. The biased learning during few-shot fine-tuning could further mislead the model to learnan imbalanced class marginal distribution. To verify this,we quantify the largest difference (LD) between the num-
ber of per-class predictions with a uniform testing set. Ifthe fine-tuned model learns a balanced class marginal dis-tribution, with a uniform testing set LDshould approach
zero. However, the empirical results show the opposite an-swer. As shown in Fig. 1, even with state-of-the-art meth-
ods [ 7,17,18,29],LDcould be largely over 10 in practice.
Figure 2. Illustration of TF-MP. We empirically evaluate a 1-shot 10-way classification on the correct/predicted number of per-class
predictions. The model without TF-MP presents a severely imbalanced categorical performance even with the same number of per-class training samples. This motivates our methodology: (1) By proposing Margin-based uncertainty, the loss of each unlabeled testing data
is weighted during finetuning, compressing the utilization of wrongly predicted testing data. (2) We explicitly regularize the categorical
probability for each testing data to pursue balanced class-wise learning during finetuning. Using TF-MP, the difference between per-class predictions reduces from 21.3% to 14.4% with per-class accuracy improved from 4.5% to 4.9%. Results are averaged over 100 episodes
in Meta-Dataset [31].
The observation in Fig. 1 demonstrates that the fine-
tuned models in FSL suffer from severely imbalanced categorical performance. In other words, the learned class marginal distribution of few-shot fine-tuned models is largely imbalanced and biased. We argue that solving this issue is critical to maintaining the algorithm's robustness to different testing scenarios. Classes with fewer predictions would carry low accuracy, and this issue of fine-tuned models could yield a fatal failure for testing scenarios in favor of these classes.
In this work, we revisit Transductive Fine-tuning [ 7]
by effectively using unlabelled testing data. Based on theaforementioned analysis, the imbalanced categorical perfor-mance in FSL motivates us to propose two solutions: (1)the per-sample loss weighting through Margin-based un-certainty and (2) the probability regularization. For (1),as shown in Fig. 2, using the same number of per-class
training data achieves extremely imbalanced prediction re-sults. It indicates that each sample contributes to the fi-nal performance differently, which inspires us to weighthe unlabeled testing samples according to their uncertaintyscores. Specifically, we address the importance of utiliz-ing margin [ 26] in entropy computation and demonstrate
its supreme ability to compress the utilization of wrongpredictions. For (2), as the ideal performance should becategorically balanced, we propose to explicitly regularizethe probability for each testing data. Precisely, each test-ing sample’s categorical probability is adjusted by a scalevector, which quantifies the difference between the classmarginal distribution and the uniform. The class marginaldistribution is estimated by combining each query sample
with the complete support set . Our proposed TransductiveFine-tuning with Margin-based uncertainty and Probability
regularization (TF-MP) effectively reduces the largest dif-ference between per-class predictions by around 5 samplesand further improves per-class accuracy with 2.1%, shown
in Fig. 1. Meanwhile, TF-MP shows robust cross-domain
performance boosts on Meta-Dataset, demonstrating its po-tential in real applications. Our contributions can be sum-marized as follows:
• We present the observation that: with current state-of-
the-art methods [ 7,17,18,29], the few-shot fine-tuned
models are learned with the imbalanced class marginaldistribution, which in other words presents imbalancedper-class accuracy. We highlight the importance ofsolving it to improve the model’s robustness under dif-ferent testing scenarios.
• Inspired by the observation, we revisit Transductive
Fine-tuning and propose TF-MP: (1) Utilizing Margin-based uncertainty to weigh each unlabeled testing datain the loss objective in order to compress the utiliza-
tion of possibly wrong predictions. (2) Regularizingthe categorical probability for each testing sample topursue a more balanced class marginal during finetun-
ing.
• We empirically verify that models with TF-MP learn a
more balanced class marginal distribution as shown inFig.1. Furthermore, we conduct comprehensive exper-
iments to show that TF-MP obtains consistent perfor-mance boosts on Meta-Dataset [ 31], demonstrating its
efficiency and effectiveness for practical applications.
15753
|
Tian_Modeling_the_Distributional_Uncertainty_for_Salient_Object_Detection_Models_CVPR_2023 | Abstract
Most of the existing salient object detection (SOD) mod-
els focus on improving the overall model performance, with-
out explicitly explaining the discrepancy between the train-
ing and testing distributions. In this paper, we investigate
a particular type of epistemic uncertainty, namely distribu-
tional uncertainty, for salient object detection. Specifically,
for the first time, we explore the existing class-aware dis-
tribution gap exploration techniques, i.e. long-tail learning,
single-model uncertainty modeling and test-time strategies,
and adapt them to model the distributional uncertainty for
our class-agnostic task. We define test sample that is dissim-
ilar to the training dataset as being “out-of-distribution”
(OOD) samples. Different from the conventional OOD def-
inition, where OOD samples are those not belonging to the
closed-world training categories, OOD samples for SOD
are those break the basic priors of saliency, i.e. center prior,
color contrast prior, compactness prior and etc., indicat-
ing OOD as being “continuous” instead of being discrete
for our task. We’ve carried out extensive experimental re-
sults to verify effectiveness of existing distribution gap mod-
eling techniques for SOD, and conclude that both train-time
single-model uncertainty estimation techniques and weight-
regularization solutions that preventing model activation
from drifting too much are promising directions for mod-
eling distributional uncertainty for SOD.
| 1. Introduction
Saliency detection (or salient object detection, SOD) [6,
11, 14, 40, 44, 64, 65, 67–69, 74–76, 78] aims to localize
object(s) that attract human attention. Most of the exist-
ing techniques focus on improving model performance on
benchmark testing dataset without explicitly explaining the
distribution gap issue within this task. In this paper, we
aim to explore the distributional uncertainty for better un-
derstanding of the trained saliency detection models.
Jing Zhang ([email protected]) and Yuchao Dai
([email protected]) are the corresponding authors.
Our code is publicly available at: https://npucvr.github.io/
Distributional_uncer/ .
Figure 1. Visualization of different types of uncertainty, where
aleatoric uncertainty ( p(y|x∗, θ)) is caused by the inherent ran-
domness of the data, model uncertainty ( p(θ|D)) happens when
there exists low-density region, leading to multiple solutions
within this region, and distributional uncertainty ( p(x∗|D)) oc-
curs when the test sample x∗fails to fit in the model based on the
training dataset D.
Background: Suppose the training dataset is D=
{xi, yi}N
i=1of size Nis sampled from a joint data distri-
bution p(x, y), where iindexes the samples and is omit-
ted when it’s clear. The conventional classifier is trained to
maximize the conditional log-likelihood logpθ(y|x), where
θrepresents model parameters. When deploying the trained
model in real-world, its performance depends on whether
the test sample x∗is from the same joint data distribution
p(x, y). Forx∗fromp(x, y)(indicating x∗is in-distribution
sample), its performance is impressive. However, when
it’s from a different distribution other than p(x, y)(i.e.x∗
is out-of-distribution sample), the resulting p(y|x∗)often
yield incorrect predictions with high confidence. The main
reason is that p(y|x∗)does not fit a probability distribu-
tion over the whole joint data space. To fix the above is-
sue, deep hybrid models (DHMs) [5,18,46,73] can be used
to model the joint distribution: p(x, y) =p(y|x)p(x). Al-
though the trained model may still inaccurately assign high
confidence p(y|x∗)for out-of-distribution sample x∗, effec-
tive marginal density modeling of p(x∗)can produce low
density for it, leading to reliable p(x∗, y).
Given a testing sample x∗, with a deep hybrid model,
[5] proposes to factorize the posterior joint distribution
p(x∗, y|θ, D) via:
p(x∗, y|θ, D) = p(y|x∗, θ) · p(x∗|D) · p(θ|D), (1)
where p(y|x∗, θ), p(x∗|D), and p(θ|D) are the data, distributional, and model terms, respectively.
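A toy numeric reading of this factorization, with a hypothetical two-class classifier and an assumed density value, shows how a low p(x∗|D) discounts an over-confident prediction:
```python
import torch

def joint_posterior_score(class_probs, density, model_prob=1.0):
    """Combine the three factors of Eq. (1): p(y|x*, theta), the
    distributional term p(x*|D), and p(theta|D) (a scalar here, e.g. 1 for a
    single point-estimate model)."""
    return class_probs * density * model_prob

scores = joint_posterior_score(torch.tensor([0.95, 0.05]), density=1e-4)
# roughly [9.5e-05, 5.0e-06]: the confident prediction is discounted by p(x*|D)
```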
Training
OOD Testing
Figure 2. “OOD” samples for salient object detection. Different
from the class-aware tasks, OOD for saliency detection is continu-
ous, which can be defined as attributes that break the basic saliency
priors, i.e. center prior, contrast prior, compactne |
Tan_Backdoor_Attacks_Against_Deep_Image_Compression_via_Adaptive_Frequency_Trigger_CVPR_2023 | Abstract
Recent deep-learning-based compression methods have
achieved superior performance compared with traditional
approaches. However, deep learning models have proven to
be vulnerable to backdoor attacks, where some specific trig-
ger patterns added to the input can lead to malicious behav-
ior of the models. In this paper, we present a novel backdoor
attack with multiple triggers against learned image com-
pression models. Motivated by the widely used discrete co-
sine transform (DCT) in existing compression systems and
standards, we propose a frequency-based trigger injection
model that adds triggers in the DCT domain. In particu-
lar, we design several attack objectives for various attack-
ing scenarios, including: 1) attacking compression quality
in terms of bit-rate and reconstruction quality; 2) attacking
task-driven measures, such as down-stream face recogni-
tion and semantic segmentation. Moreover, a novel simple
dynamic loss is designed to balance the influence of differ-
ent loss terms adaptively, which helps achieve more efficient
training. Extensive experiments show that with our trained
trigger injection models and simple modification of encoder
parameters (of the compression model), the proposed attack
can successfully inject several backdoors with correspond-
ing triggers in a single image compression model.
| 1. Introduction
Image compression is a fundamental task in the area
of signal processing, and has been used in many applica-
tions to store image data efficiently without much degrading
the quality. Traditional image compression methods such
as JPEG [46], JPEG2000 [26], Better Portable Graphics
(BPG) [43], and recent Versatile Video Coding (VVC) [39]
rely on hand-crafted modules for transforms and entropy
coding to improve coding efficiency. With the rapid devel-
opment of deep-learning techniques, various learning-based
approaches [2, 7, 20, 36] adopt end-to-end trainable models
*Corresponding author.
Figure 1. Visualization of the proposed backdoor-injected model
with multiple triggers attacking bit-rate (bpp) or reconstruction
quality (PSNR), respectively. The second sample shows the result
of the BPP attack with a huge increase in bit-rate, and the third one
presents a PSNR attack with severely corrupted output.
that integrate the pipeline of prediction, transform, and en-
tropy coding jointly to achieve improved performance.
Together with the impressive performance of the deep
neural networks, many concerns have been raised about
their related AI security issues [23,54]. Primarily due to the
lack of transparency in deep neural networks, it is observed
that a variety of attacks can compromise the deployment
and reliability of AI systems [24, 25, 53] in computer vi-
sion, natural language processing, speech recognition, etc.
Among all these attacks, backdoor attacks have recently at-
tracted lots of attention. As most SOTA models require ex-
tensive computation resources and a lengthy training pro-
cess, it is more practical and economical to download and
directly adopt a third-party model with pretrained weights,
which might face the threat from a malicious backdoor.
In general, a backdoor-injected model works as expected
on normal inputs, while a specific trigger added to the clean
input can activate the malicious behavior, e.g.,incorrect pre-
diction. Depending on the scope of the attacker’s access
to the data, the backdoor attacks can be categorized into
poisoning-based and non-poisoning-based attacks [32]. In
the scenario of poisoning-based attack [5,14], attackers can
only manipulate the dataset by inserting poisoned data. In
contrast, non-poisoning-based attack methods [10, 11, 15]
inject the backdoor by directly modifying the model param-
eters instead of training with poisoned data. As image com-
pression methods take the original input as a ground truth
label, it is hard to perform a poisoning-based backdoor at-
tack. Therefore, our work investigates a backdoor attack by
modifying the parameter of only the encoder in a compres-
sion model.
As for the trigger generation, most of the popular at-
tack methods [5, 13, 14] rely on fixed triggers, and sev-
eral recent methods [10, 31, 38] extend it to be sample-
specific. While most previous papers focus on high-level vi-
sion tasks ( e.g., image classification and semantic segmen-
tation), triggers in those works are added in only the spa-
tial domain and may not perform well in low-level vision
tasks such as image compression. Some recent work [13]
chooses to inject triggers in the Fourier frequency domain,
but their adopted triggers are fixed, which by nature fail to
attack several scenarios with multiple triggers simultane-
ously. Motivated by the widely used discrete cosine trans-
form (DCT) in existing compression systems and standards,
we propose a frequency-based trigger injection in the DCT
domain to generate the poisoned images. Extensive ex-
periments show that backdoor attacks also threaten deep-
learning compression models and can cause much degra-
dation once the attacking triggers are applied. As shown
in Fig. 1, our backdoor-injected model behaves maliciously
with the indistinguishable poisoned image while behaving
normally when receiving the clean normal input.
To the best of our knowledge, backdoor attacks have
been largely neglected in low-level computer vision re-
search. In this paper, we make the first endeavor to inves-
tigate backdoor attacks against learned image compression
models. Our main contributions are summarized below.
• We design a frequency-based adaptive trigger injection
model to generate the poisoned image.
• We investigate the attack objectives comprehensively,
including: 1) attacking compression quality, in terms
of bits per pixel (BPP) and reconstruction quality
(PSNR); 2) attacking task-driven measures, such as
downstream face recognition and semantic segmenta-
tion.
• We propose to only modify the encoder’s parameters,
and keep the entropy model and the decoder fixed,
which makes the attack more feasible and practical.
• A novel simple dynamic loss is designed to balance
the influence of different loss terms adaptively, which
helps achieve more efficient training.• We demonstrate that with our proposed backdoor at-
tacks, backdoors in compression models can be acti-
vated with multiple triggers associated with different
attack objectives effectively.
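As a toy illustration of where the frequency-domain perturbation enters the pipeline, the sketch below adds a fixed pattern to the block-wise DCT coefficients of an image; the actual trigger injection model is learned and adaptive, and the block size, the additive form, and the function name are assumptions.
```python
import torch
from scipy.fft import dctn, idctn

def add_frequency_trigger(image, trigger, block=8):
    """image: (C, H, W) float tensor on CPU; trigger: (block, block) numpy
    array of frequency-domain perturbations added to every DCT block."""
    poisoned = image.clone()
    C, H, W = image.shape
    for c in range(C):
        for i in range(0, H - block + 1, block):
            for j in range(0, W - block + 1, block):
                patch = image[c, i:i + block, j:j + block].cpu().numpy()
                coeffs = dctn(patch, norm="ortho") + trigger
                restored = idctn(coeffs, norm="ortho")
                poisoned[c, i:i + block, j:j + block] = torch.from_numpy(
                    restored).to(poisoned.dtype)
    return poisoned
```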
|
Thavamani_Learning_To_Zoom_and_Unzoom_CVPR_2023 | Abstract
Many perception systems in mobile computing, au-
tonomous navigation, and AR/VR face strict compute con-
straints that are particularly challenging for high-resolution
input images. Previous works propose nonuniform downsam-
plers that "learn to zoom" on salient image regions, reducing
compute while retaining task-relevant image information.
However, for tasks with spatial labels (such as 2D/3D ob-
ject detection and semantic segmentation), such distortions
may harm performance. In this work (LZU), we "learn to
zoom" in on the input image, compute spatial features, and
then "unzoom" to revert any deformations. To enable ef-
ficient and differentiable unzooming, we approximate the
zooming warp with a piecewise bilinear mapping that is
invertible. LZU can be applied to any task with 2D spa-
tial input and any model with 2D spatial features, and we
demonstrate this versatility by evaluating on a variety of
tasks and datasets: object detection on Argoverse-HD, se-
mantic segmentation on Cityscapes, and monocular 3D ob-
ject detection on nuScenes. Interestingly, we observe boosts
in performance even when high-resolution sensor data is
unavailable, implying that LZU can be used to "learn to up-
sample" as well. Code and additional visuals are available
athttps://tchittesh.github.io/lzu/ .
| 1. Introduction
In many applications, the performance of perception sys-
tems is bottlenecked by strict inference-time constraints.
This can be due to limited compute (as in mobile computing),
a need for strong real-time performance (as in autonomous
vehicles), or both (as in augmented/virtual reality). These
constraints are particularly crippling for settings with high-
resolution sensor data. Even with optimizations like model
compression [ 4] and quantization [ 23], it is common practice
to downsample inputs during inference.
However, running inference at a lower resolution undeni-
ably destroys information. While some information loss is
†Now at Waymo.
‡Now at Nvidia.
Figure 1. LZU is characterized by "zooming" the input image,
computing spatial features, then "unzooming" to revert spatial de-
formations. LZU can be applied to any task and model that makes
use of internal 2D features to process 2D inputs. We show visual
examples of output tasks including 2D detection, semantic segmen-
tation, and 3D detection from RGB images.
unavoidable, the usual solution of uniform downsampling
assumes that each pixel is equally informative towards the
task at hand. To rectify this assumption, Recasens et al.[20]
propose Learning to Zoom (LZ), a nonuniform downsampler
that samples more densely at salient (task-relevant) image
regions. They demonstrate superior performance relative
to uniform downsampling on human gaze estimation and
fine-grained image classification. However, this formulation
warps the input image and thus requires labels to be invariant
to such deformations.
Adapting LZ downsampling to tasks with spatial labels
is trickier, but has been accomplished in followup works
for semantic segmentation (LDS [ 11]) and 2D object de-
tection (FOVEA [ 22]). LDS [ 11] does not unzoom during
learning, and so defines losses in the warped space. This
necessitates additional regularization that may not apply to
non-pixel-dense tasks like detection. FOVEA [ 22]does un-
zoom bounding boxes for 2D detection, but uses a special
purpose solution that avoids computing an inverse, making
it inapplicable to pixel-dense tasks like semantic segmenta-
tion. Despite these otherwise elegant solutions, there doesn’t
seem to be a general task-agnostic solution for intelligent
downsampling.
Our primary contribution is a general framework in which
we zoom in on an input image, process the zoomed im-
age, and then unzoom the output back with an inverse warp.
Learning to Zoom and Unzoom (LZU) can be applied to
any network that uses 2D spatial features to process 2D spa-
tial inputs (Figure 1) with no adjustments to the network
or loss. To unzoom, we approximate the zooming warp
with a piecewise bilinear mapping. This allows efficient and
differentiable computation of the forward and inverse warps.
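A minimal sketch of the zoom-process-unzoom pattern, assuming the forward warp and its (approximate) inverse are already available as grid_sample-style sampling grids, is given below; constructing the piecewise bilinear inverse itself is not reproduced.
```python
import torch.nn.functional as F

def zoom_process_unzoom(image, model, forward_grid, inverse_grid):
    """image: (N, C, H, W); forward_grid / inverse_grid: (N, H, W, 2) grids in
    [-1, 1] encoding the zooming warp and its inverse."""
    zoomed = F.grid_sample(image, forward_grid, align_corners=False)
    feats = model(zoomed)                                   # any 2D feature map
    # Resize the inverse grid to the feature resolution before unzooming.
    inv = inverse_grid.permute(0, 3, 1, 2)
    inv = F.interpolate(inv, size=feats.shape[-2:], mode="bilinear",
                        align_corners=False).permute(0, 2, 3, 1)
    return F.grid_sample(feats, inv, align_corners=False)   # deformation reverted
```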
To demonstrate the generality of LZU, we demonstrate
performance a variety of tasks: object detection with Reti-
naNet [ 17] on Argoverse-HD [ 14],semantic segmentation
with PSPNet [ 29] on Cityscapes [ 7], and monocular 3D
detection with FCOS3D [ 26] on nuScenes [ 2]. In our experi-
ments, to maintain favorable accuracy-latency tradeoffs, we
use cheap sources of saliency (as in [ 22]) when determining
where to zoom. On each task, LZU increases performance
over uniform downsampling and prior works with minimal
additional latency.
Interestingly, for both 2D and 3D object detection, we
also see performance boosts even when processing low reso-
lution input data. While prior works focus on performance
improvements via intelligent downsampling [ 20,22], our
results show that LZU can also improve performance by
intelligently upsampling (suggesting that current networks
struggle to remain scale invariant for small objects, a well-
known observation in the detection community [ 18]).
|
Tang_Neuro-Modulated_Hebbian_Learning_for_Fully_Test-Time_Adaptation_CVPR_2023 | Abstract
Fully test-time adaptation aims to adapt the network
model based on sequential analysis of input samples dur-
ing the inference stage to address the cross-domain per-
formance degradation problem of deep neural networks.
We take inspiration from the biological plausibility learn-
ing where the neuron responses are tuned based on a lo-
cal synapse-change procedure and activated by competi-
tive lateral inhibition rules. Based on these feed-forward
learning rules, we design a soft Hebbian learning process
which provides an unsupervised and effective mechanism
for online adaptation. We observe that the performance
of this feed-forward Hebbian learning for fully test-time
adaptation can be significantly improved by incorporating
a feedback neuro-modulation layer. It is able to fine-tune
the neuron responses based on the external feedback gener-
ated by the error back-propagation from the top inference
layers. This leads to our proposed neuro-modulated Heb-
bian learning (NHL) method for fully test-time adaptation.
With the unsupervised feed-forward soft Hebbian learning
being combined with a learned neuro-modulator to capture
feedback from external responses, the source model can be
effectively adapted during the testing process. Experimen-
tal results on benchmark datasets demonstrate that our pro-
posed method can significantly improve the adaptation per-
formance of network models and outperforms existing state-
of-the-art methods.
| 1. Introduction
Although deep neural networks have achieved great
success in various machine learning tasks, their perfor-
mance tends to degrade significantly when there is data
shift [27, 55] between the training data in the source do-
main and the testing data in the target domain [40]. To ad-
dress the performance degradation problem, unsupervised
domain adaptation (UDA) [16,38,50] has been proposed to
fine-tune the model parameters with a large amount of un-
labeled testing data in an unsupervised manner. Source-free
UDA methods [33, 35, 67] aim to adapt the network model
without the need to access the source-domain samples.
There are two major categories of source-free UDA
methods. The first category needs to access the whole test
dataset on the target domain to achieve their adaptation per-
formance [35, 67]. Notice that, in many practical scenarios
when we deploy the network model on client devices, the
network model does not have access to the whole dataset in
the target domain since collecting and constructing the test
dataset on the client side is very costly. The second type of
method, called fully test-time adaptation, only needs access
to live streams of test samples [41, 64, 66], which is able to
dynamically adapt the source model on the fly during the
testing process. Existing methods for fully test-time adap-
tation mainly focus on constructing various loss functions
to regulate the inference process and adapt the model based
on error back-propagation. For example, the TENT method
[66] updates the batch normalization module by minimizing
an entropy loss. The TTT method [64] updates the feature
extractor parameters according to a self-supervised loss on
a proxy learning task. The TTT++ method [37] introduces a
feature alignment strategy based on online moment match-
ing.
1.1. Challenges in Fully Test-Time UDA
We recognize that most domain variations, such as changes in the visual scenes and image transformations or corruptions, manifest in early, low-level features of the semantic hierarchy [66]. They can be effectively captured and modeled by the lower layers of the network model. From the per-
spective of machine learning, early representations through
the lower layer play an important role to capture the pos-
terior distribution of the underlying explanatory factors for
the observed input [1]. For instance, in deep neural network
models, the early layers of the network tend to respond to
corners, edges, or colors. In contrast, deeper layers respond
to more class-specific features [72]. In the corruption test-time adaptation scenario, the class-specific features remain the same because the testing datasets are corrupted versions of the training domain. However, the early layers of the model can fail due to corruption.
Therefore, the central challenge in fully test-time UDA
lies in how to learn useful early layer representations of the
test samples without supervision. Motivated by this obser-
vation, we propose to explore neurobiology-inspired Heb-
bian learning for effective early-layer representation learn-
ing and fully test-time adaptation. It has been recognized
that the learning rule of supervised end-to-end deep neural
network training using back-propagation and the learning
rules of the early front-end neural processing in neurobiol-
ogy are unrelated [28]. The responses of neurons in bio-
logical neural networks are tuned by local pre-synaptic and
post-synaptic activity, along with global variables that mea-
sure task performance, rather than the specific activity of
other neurons [69].
Figure 1. The feature map visualization after the first convolution layer obtained by different learning methods (rows: source domain, target domain, source model, Hebbian learning, oracle).
1.2. Hebbian Learning
Hebbian learning aims to learn useful early layer rep-
resentations without supervision based on local synaptic
plasticity rules, which is able to generate early representa-
tions that are as good as those learned by end-to-end super-
vised training with back-propagation [28, 52]. Drastically
different from the current error back-propagation methods
which require pseudo-labels or loss functions from the top
network layers, Hebbian learning is a pure feed-forward
adaptation process and does not require feedback from the
distant top network layers. The responses of neurons are
tuned based on a local synapse-change procedure and ac-
tivated by competitive lateral inhibition rules [28]. Dur-
ing the learning process, the strength of synapses under-
goes local changes that are proportional to the activity ofthe pre-synaptic cell and dependent on the activity of the
post-synaptic cell. It also introduces local lateral inhibition
between neurons within a layer, where the synapses of hid-
den units with strong responses are pushed toward the pat-
terns that drive them, while those with weaker responses are
pushed away from these patterns.
Existing literature has shown that early representations learned by Hebbian learning are as good as those learned by back-propagation, and are even more robust at test time [28, 29, 52].
Figure 1 shows the feature maps learned by different meth-
ods. The first row shows the original image. The second
row shows the images in the target domain with significant
image corruption. The third row shows the feature maps
learned by the network model trained in the source domain
for these target-domain images. The fourth row shows the
feature maps learned by our Hebbian learning method. The
last row (“oracle”) shows the feature maps learned with true
labels. We can see that the unsupervised Hebbian learning
is able to generate feature maps which are as good as those
from supervised learning.
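To make the above rule concrete, the following is a minimal sketch of one soft Hebbian update with lateral inhibition; the softmax gating, the Oja-style decay term, and the hyperparameters are illustrative choices and do not reproduce the paper's exact NHL formulation.

```python
import torch

def soft_hebbian_step(W, x, lr=1e-3, temperature=0.1):
    """One unsupervised update of the synapse matrix W (H, D) from a mini-batch x (B, D)."""
    y = x @ W.t()                                   # post-synaptic activations (B, H)
    # Soft lateral inhibition: a softmax over units replaces a hard winner-take-all
    # decision, so strongly responding units receive most (but not all) of the update.
    g = torch.softmax(y / temperature, dim=1)       # (B, H)
    # Local plasticity (Oja-style): pull strongly responding units toward the input
    # pattern; the decay term keeps the synapse norms bounded.
    dW = g.t() @ x - (g * y).sum(dim=0).unsqueeze(1) * W
    return W + lr * dW / x.shape[0]

# Toy usage: W = torch.randn(64, 27); for each test mini-batch of flattened 3x3x3
# patches xb of shape (B, 27), W = soft_hebbian_step(W, xb).
```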
1.3. Our Major Idea
In this work, we observe that Hebbian learning, although it provides a new and effective approach for unsupervised learning of early-layer image representations, is not able to achieve satisfactory performance in fully test-time adaptation when directly applied to the network model. First,
the original hard decision for competitive learning is not
suitable for fully test-time adaptation. Second, the Hebbian
learning does not have an effective mechanism to consider
external feedback, especially the feedback from the top net-
work layers. We observe that, biologically, the visual pro-
cessing is realized through hierarchical models considering
a bottom-up early representation learning for the sensory
input, and a top-down feedback mechanism based on pre-
dictive coding [15, 56].
Motivated by this, in this work, we propose to develop
a new approach, called neuro-modulated Hebbian learn-
ing (NHL) , for fully test-time adaptation. We first incor-
porate a soft decision rule into the feed-forward Hebbian
learning to improve its competitive learning. Second, we
learn a neuro-modulator to capture feedback from exter-
nal responses, which controls which type of feature is con-
solidated and further processed to minimize the predictive
error. During inference, the source model is adapted by the proposed NHL rule for each mini-batch of testing samples. Experimental results on
benchmark datasets demonstrate that our proposed method
can significantly improve the adaptation performance of
network models and outperforms existing state-of-the-art
methods.
1.4. Summary of Major Contributions
To summarize, our major contributions include: (1)
we identify that the major challenge in fully test-time
adaptation lies in effective unsupervised learning of early
layer representations, and explore neurobiology-inspired
soft Hebbian learning for effective early layer representa-
tion learning and fully test-time adaptation. (2) We develop
a new neuro-modulated Hebbian learning method which
combines unsupervised feed-forward Hebbian learning of
early layer representation with a learned neuro-modulator
to capture feedback from external responses. We analyze
the optimal property of the proposed NHL algorithm based
on free-energy principles [14, 15]. (3) We evaluate our pro-
posed NHL method on benchmark datasets for fully test-
time adaptation, demonstrating its significant performance
improvement over existing methods.
|
Tian_Integrally_Pre-Trained_Transformer_Pyramid_Networks_CVPR_2023 | Abstract
In this paper, we present an integral pre-training frame-
work based on masked image modeling (MIM). We advo-
cate for pre-training the backbone and neck jointly so that
the transfer gap between MIM and downstream recogni-
tion tasks is minimal. We make two technical contributions.
First , we unify the reconstruction and recognition necks
by inserting a feature pyramid into the pre-training stage.
Second, we complement masked image modeling (MIM) with
masked feature modeling (MFM) that offers multi-stage su-
pervision to the feature pyramid. The pre-trained mod-
els, termed integrally pre-trained transformer pyramid net-
works (iTPNs), serve as powerful foundation models for vi-
sual recognition. In particular, the base/large-level iTPN
achieves an 86.2% /87.8% top-1 accuracy on ImageNet-1K,
a53.2% /55.6% box AP on COCO object detection with 1×
training schedule using Mask-RCNN, and a 54.7% /57.7%
mIoU on ADE20K semantic segmentation using UPerHead
– all these results set new records. Our work inspires
the community to work on unifying upstream pre-training
and downstream fine-tuning tasks. Code is available at
github.com/sunsmarterjie/iTPN.
| 1. Introduction
Recent years have witnessed two major progresses in vi-
sual recognition, namely, the vision transformer architec-
ture [22] as network backbone and masked image mod-
eling (MIM) [3, 28, 68] for visual pre-training. Combin-
ing these two techniques yields a generalized pipeline that
achieves state-of-the-arts in a wide range of visual recogni-
tion tasks, including image classification, object detection,
and instance/semantic segmentation.
One of the key issues of the above pipeline is the transfer
gap between upstream pre-training and downstream fine-
[Figure 1, top: fine-tuning accuracy (%) on ImageNet-1K vs. pre-training epochs (400/800/1600) for base- and large-size models; compared methods include MAE, MaskFeat, SimMIM, iBoT, data2vec, BEiT (DALLE), CAE (DALLE), PeCo (CodeBook), MVP (CLIP), BEiT-v2 (CLIP), and Ours (CLIP).]
Figure 1, bottom:
model        IN-1K cls. acc.   COCO det. AP (MR, 1x)   COCO seg. AP   ADE20K seg. mIoU (UP)
iTPN-B       86.2              53.2                    46.6           54.7
prev. best   85.5 [50]         50.0 [13]               44.0 [13]      53.0 [50]
iTPN-L       87.8              55.6                    48.6           57.7
prev. best   87.3 [50]         54.5 [13]               47.6 [13]      56.7 [50]
Figure 1. Top: on ImageNet-1K classification, iTPN shows sig-
nificant advantages over prior methods, either only using pixel su-
pervision (top) or leveraging knowledge from a pre-trained teacher
(bottom, in the parentheses lies the name of teacher model). Bot-
tom: iTPN surpasses previous best results in terms of recognition
accuracy (%) on several important benchmarks. Legends – IN-1K:
ImageNet-1K, MR: Mask R-CNN [30], UP: UPerHead [66].
tuning. From this point of view, we argue that downstream
visual recognition, especially fine-scaled recognition ( e.g.,
detection and segmentation), requires hierarchical visual
features. However, most existing pre-training tasks ( e.g.,
BEiT [3] and MAE [28]) were built upon plain vision trans-
formers. Even if hierarchical vision transformers have been
used ( e.g., in SimMIM [68], ConvMAE [25], and Green-
MIM [33]), the pre-training task only affects the backbone
but leaves the neck ( e.g., a feature pyramid) un-trained. This
brings extra risks to downstream fine-tuning as the opti-
mization starts with a randomly initialized neck which is
not guaranteed to cooperate with the pre-trained backbone.
In this paper, we present an integral pre-training frame-
work to alleviate the risk. We establish the baseline with
HiViT [74], an MIM-friendly hierarchical vision trans-
former, and equip it with a feature pyramid. To jointly op-
timize the backbone (HiViT) and neck (feature pyramid),
we make two-fold technical contributions. First , we unify
the upstream and downstream necks by inserting a feature
pyramid into the pre-training stage (for reconstruction) and
reusing the weights in the fine-tuning stage (for recogni-
tion). Second , to better pre-train the feature pyramid, we
propose a new masked feature modeling (MFM) task that
(i) computes intermediate targets by feeding the original
image into a moving-averaged backbone, and (ii) uses the
output of each pyramid stage to reconstruct the intermedi-
ate targets. MFM is complementary to MIM and improves
the accuracy of both reconstruction and recognition. MFM
can also be adapted to absorb knowledge from a pre-trained
teacher ( e.g., CLIP [52]) towards better performance.
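A minimal sketch of this masked feature modeling loss is given below; the per-stage smooth-L1 regression, the feature resizing, and the EMA momentum are assumptions used for illustration rather than the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(ema_model, model, momentum=0.999):
    """Moving-averaged backbone that produces the intermediate MFM targets."""
    for pe, p in zip(ema_model.parameters(), model.parameters()):
        pe.mul_(momentum).add_(p.detach(), alpha=1.0 - momentum)

def mfm_loss(pyramid_feats, target_feats):
    """Each pyramid stage regresses the EMA-teacher features of the unmasked image."""
    loss = 0.0
    for f, t in zip(pyramid_feats, target_feats):
        # Match spatial resolution before the per-stage regression loss.
        t = F.interpolate(t, size=f.shape[-2:], mode="bilinear", align_corners=False)
        loss = loss + F.smooth_l1_loss(f, t.detach())
    return loss / len(pyramid_feats)

# Training step (sketch):
#   targets = ema_backbone(image)                       # multi-stage features, no grad
#   feats   = feature_pyramid(backbone(masked_image))   # one feature map per stage
#   loss    = mim_pixel_loss + mfm_loss(feats, targets)
#   ...optimizer step..., then ema_update(ema_backbone, backbone)
```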
The obtained models are named integrally pre-trained
pyramid transformer networks (iTPNs). We evaluate them
on standard visual recognition benchmarks. As highlighted
in Figure 1, the iTPN series report the best known down-
stream recognition accuracy. On COCO and ADE20K ,
iTPN largely benefits from the pre-trained feature pyra-
mid. For example, the base/large-level iTPN reports a 53.2%/55.6% box AP on COCO (1× schedule, Mask R-CNN) and a 54.7%/57.7% mIoU on ADE20K (UPerNet), surpassing all existing methods by large margins. On ImageNet-1K, iTPN also shows significant advantages, implying that the backbone itself becomes stronger during joint optimization with the neck. For example, the base/large-level iTPN reports an 86.2%/87.8% top-1 classification accuracy, beating the previous best record by 0.7%/0.5%, a margin that is not as small as it seems given such fierce competition.
In diagnostic experiments, we show that iTPN enjoys both
(i) a lower reconstruction error in MIM pre-training and (ii)
a faster convergence speed in downstream fine-tuning – this
validates that shrinking the transfer gap benefits both up-
stream and downstream parts.
Overall, the key contribution of this paper lies in the inte-
gral pre-training framework that, beyond setting new state-
of-the-arts, enlightens an important future research direc-
tion – unifying upstream pre-training and downstream fine-
tuning to shrink the transfer gap between them. |
Tschannen_CLIPPO_Image-and-Language_Understanding_From_Pixels_Only_CVPR_2023 | Abstract
Multimodal models are becoming increasingly effective,
in part due to unified components, such as the Transformer
architecture. However, multimodal models still often consist
of many task- and modality-specific pieces and training pro-
cedures. For example, CLIP (Radford et al., 2021) trains in-
dependent text and image towers via a contrastive loss. We
explore an additional unification: the use of a pure pixel-
based model to perform image, text, and multimodal tasks.
Our model is trained with contrastive loss alone, so we call
it CLIP-Pixels Only (CLIPPO). CLIPPO uses a single en-
coder that processes both regular images and text rendered
as images. CLIPPO performs image-based tasks such as re-
trieval and zero-shot image classification almost as well as
CLIP-style models, with half the number of parameters and
no text-specific tower or embedding. When trained jointly
via image-text contrastive learning and next-sentence con-
trastive learning, CLIPPO can perform well on natural
language understanding tasks, without any word-level loss
(language modelling or masked language modelling), out-
performing pixel-based prior work. Surprisingly, CLIPPO
can obtain good accuracy in visual question answering,
simply by rendering the question and image together. Fi-
nally, we exploit the fact that CLIPPO does not require a
tokenizer to show that it can achieve strong performance on
multilingual multimodal retrieval without modifications.
| 1. Introduction
In recent years, large-scale multimodal training of
Transformer-based models has led to improvements in the
state-of-the-art in different domains including vision [ 2,10,
74–76], language [ 6,11], and audio [ 5]. In particular, in
computer vision and image-language understanding, a sin-
gle large pretrained model can outperform task-specific ex-
pert models [ 10,74,75]. However, large multimodal mod-
els often use modality or dataset-specific encoders and de-
coders, and accordingly lead to involved protocols. For
example, such models frequently involve training different
Code and pretrained models are available as part of bigvision [4]
https://github.com/google-research/big_vision .
Figure 1. CLIP [56] trains separate image and text encoders, each with modality-specific preprocessing and embedding, on image/alt-text pairs with a contrastive objective. CLIPPO trains a pure pixel-based model with equivalent capabilities by rendering the alt-text as an image, encoding the resulting image pair using a shared vision encoder (in two separate forward passes), and applying the same training objective as CLIP.
parts of the model in separate phases on their respective
datasets, with dataset-specific preprocessing, or transferring
different parts in a task-specific manner [ 75]. Such modality
and task-specific components can lead to additional engi-
neering complexity, and pose challenges when introducing
new pretraining losses or downstream tasks. Developing
a single end-to-end model that can process any modality,
or combination of modalities, would be a valuable step for
multimodal learning. Here, we focus on images and text.
A number of key unifications have accelerated the
progress of multimodal learning. First, the Transformer
architecture has been shown to work as a universal back-
bone, performing well on text [ 6,15], vision [ 16], au-
dio [ 5,24,54], and other domains [ 7,34]. Second, many
papers have explored mapping different modalities into a
single shared embedding space to simplify the input/output
interface [ 21,22,46,69], or develop a single interface to
many tasks [ 31,37]. Third, alternative representations of
modalities allow harnessing in one domain neural archi-
tectures or training procedures designed for another do-
main [ 28,49,54,60]. For example, [ 60] and [ 28,54] repre-
sent text and audio, respectively, by rendering these modal-
ities as images (via a spectrogram in the case of audio).
In this paper, we explore the use of a pure pixel-based
model for multimodal learning of text and images. Our
model is a single Vision Transformer [ 16] that processes
visual input, or text, or both together, all rendered as RGB
images. The same model parameters are used for all modal-
ities, including low-level feature processing; that is, there
are no modality-specific initial convolutions, tokenization
algorithms, or input embedding tables. We train our model
using only a single task: contrastive learning, as popular-
ized by CLIP [ 56] and ALIGN [ 32]. We therefore call our
model CLIP-Pixels Only (CLIPPO).
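The core idea can be sketched in a few lines: render the alt-text as an image, encode both inputs with the same vision encoder in two forward passes, and apply a symmetric contrastive loss. The rendering routine and temperature below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image, ImageDraw

def render_text(text, size=224):
    """Rasterize a caption onto a blank canvas so it can be fed to the ViT."""
    canvas = Image.new("RGB", (size, size), "white")
    ImageDraw.Draw(canvas).text((4, 4), text, fill="black")
    return torch.from_numpy(np.array(canvas)).permute(2, 0, 1).float() / 255.0

def clippo_loss(encoder, images, captions, temperature=0.07):
    """CLIP-style contrastive loss with a single shared encoder for both modalities."""
    text_imgs = torch.stack([render_text(c) for c in captions]).to(images.device)
    z_img = F.normalize(encoder(images), dim=-1)      # two separate forward passes
    z_txt = F.normalize(encoder(text_imgs), dim=-1)   # through the *same* encoder
    logits = z_img @ z_txt.t() / temperature
    labels = torch.arange(len(images), device=images.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))
```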
We find that CLIPPO performs similarly to CLIP-style
models (within 1-2%) on the main tasks CLIP was de-
signed for—image classification and text/image retrieval—
despite not having modality-specific towers. Surpris-
ingly, CLIPPO can perform complex language understand-
ing tasks to a decent level without any left-to-right lan-
guage modelling, masked language modelling, or explicit
word-level losses. In particular, on the GLUE benchmark
[73] CLIPPO outperforms classic NLP baselines, such as
ELMO+BiLSTM+attention, outperforms prior pixel-based
masked language models [ 60], and approaches the score
of BERT [ 15]. Interestingly, CLIPPO obtains good perfor-
mance on VQA when simply rendering the image and text
together, despite never having been pretrained on such data.
Pixel-based models have an immediate advantage over
regular language models because they do not require pre-
determining the vocabulary/tokenizer and navigating the
corresponding intricate trade-offs; consequently, we ob-
serve improved performance on multilingual retrieval com-
pared to an equivalent model that uses a classical tokenizer.
|
Sun_RefTeacher_A_Strong_Baseline_for_Semi-Supervised_Referring_Expression_Comprehension_CVPR_2023 | Abstract
Referring expression comprehension (REC) often re-
quires a large number of instance-level annotations for fully
supervised learning, which are laborious and expensive. In
this paper, we present the first attempt of semi-supervised
learning for REC and propose a strong baseline method
called RefTeacher. Inspired by the recent progress in com-
puter vision, RefTeacher adopts a teacher-student learning
paradigm, where the teacher REC network predicts pseudo-
labels for optimizing the student one. This paradigm allows
REC models to exploit massive unlabeled data based on a
small fraction of labeled. In particular, we also identify
two key challenges in semi-supervised REC, namely, sparse
supervision signals and worse pseudo-label noise. To ad-
dress these issues, we equip RefTeacher with two novel de-
signs called Attention-based Imitation Learning (AIL) and
Adaptive Pseudo-label Weighting (APW). AIL can help the
student network imitate the recognition behaviors of the
teacher, thereby obtaining sufficient supervision signals.
APW can help the model adaptively adjust the contributions
of pseudo-labels with varying qualities, thus avoiding con-
firmation bias. To validate RefTeacher, we conduct exten-
sive experiments on three REC benchmark datasets. Exper-
imental results show that RefTeacher obtains obvious gains
over the fully supervised methods. More importantly, using
only 10% labeled data, our approach allows the model to
achieve near 100% fully supervised performance, e.g., only
-2.78% on RefCOCO. Project: https://refteacher.github.io/.
| 1. Introduction
Referring Expression Comprehension (REC) [33–35,45,
49, 53, 60, 62], also called Visual grounding [26, 51, 52]
or Phrase localization [20, 39], aims to locate the target
Figure 1. Statistics of the pseudo-label quality in semi-
supervised REC with different percentages of labeled infor-
mation. The REC model only predicts one pseudo-box for each
image-text pair and cannot apply filtering, so the pseudo-labels are
usually noisy and low-quality during training.
objects in an image referred by a given natural language
expression. Compared to conventional object detection
tasks [4,9–11,23,24,28,40–42], REC is not limited to a fixed
set of categories and can be generalized to open-vocabulary
recognition [31, 60]. However, as a detection task, REC
also requires a large number of instance-level annotations
for training, which poses a huge obstacle to its practical ap-
plications.
To address this issue, one feasible solution is semi-
supervised learning (SSL), which has been well studied on
various computer vision tasks [1–3, 8, 16, 29, 43, 44, 47, 59]
but not yet exploited in REC. In particular, recent advances
in semi-supervised object detection (SSOD) [16, 29, 44, 47,
59] has yielded notable progress in practical applications.
These SSOD methods apply a training framework consisting
of two detection networks with the same configurations, act-
ing as teacher and student, respectively. The teacher net-
work is in charge of generating pseudo-labels to optimize
the student during training, which can exploit massive unla-
beled data based on a small amount of labeled information.
With the help of this effective training paradigm, the latest
SSOD method [37] can even achieve fully supervised per-
formance with only 40% and 25% labeled information on
MSCOCO [25] and PASCAL VOC [7], respectively.
However, directly transferring this successful paradigm
to REC still suffers from two main challenges due to task
gaps. The first one is the extremely sparse supervision
signals. In contrast to object detection, REC only ground
one instance for each text-image pair. This prediction pat-
tern makes the REC model receive much fewer pseudo-
supervisions than SSOD during teacher-student learning,
i.e., only one bounding box without class pseudo-labels.
For instance, Compared with SSOD that has 6-15 high-
quality pseudo-boxes for each image [30, 37], semi-REC
only has 0.5 box on average at the infant training stages as
shown in Fig. 1. The sparse supervision signals also lead to
worse pseudo-label quality, which is the second challenge.
SSOD methods [16,29,44,47,59] can apply NMS [10] and
high-threshold filtering to discard the vast majority of noisy
pseudo-labels, thereby avoiding the error accumulation is-
sue [29, 37, 44] in SSL. But in REC, a strong filtering is not
feasible due to the already sparse pseudo-label information.
This results in that most pseudo-labels of REC are of much
lower-quality.
Based on these observations, we propose the first semi-
supervised approach for REC called RefTeacher with two
novel designs, namely Attention-based Imitation Learn-
ing(AIL) and Adaptive Pseudo-label Weighting (APW). In
principle, RefTeacher also adopts a teacher-student frame-
work, where the teacher predicts the pseudo bounding boxes
for the student according to the given expressions. Follow-
ing the latest SSOD [29,30,37,47], we also use EMA to up-
date the gradients of the teacher network from the student,
and introduce data augmentation and burn-in strategies to
improve SSL. To enrich the supervision signals, the pro-
posed AIL helps the student imitate the attention behaviors
of the teacher, thereby improving the knowledge transfer-
ring. APW is further used to reduce the impact of noisy
pseudo-labels, which is achieved via adaptively weighting
label information and the corresponding gradient updates.
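A simplified sketch of one teacher-student step is shown below; it assumes a model interface that returns a box, a confidence score, and an attention map, and the specific weighting and imitation losses are stand-ins for the paper's AIL and APW formulations rather than their exact definitions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Teacher weights track an exponential moving average of the student."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

def semi_rec_step(teacher, student, images_weak, images_strong, expressions):
    with torch.no_grad():
        pseudo_box, conf, t_attn = teacher(images_weak, expressions)   # one box per pair
    s_box, _, s_attn = student(images_strong, expressions)
    # Adaptive weighting (sketch): scale each pseudo-box loss by the teacher's
    # confidence instead of filtering, since filtering would leave no supervision.
    w = conf / (conf.mean() + 1e-6)
    loss_box = (w * F.l1_loss(s_box, pseudo_box, reduction="none").mean(dim=-1)).mean()
    # Attention imitation (sketch): the student mimics the teacher's attention maps,
    # providing denser supervision than the single pseudo-box alone.
    loss_attn = F.mse_loss(s_attn, t_attn)
    return loss_box + loss_attn

# After each optimizer step on the student: ema_update(teacher, student).
```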
To validate RefTeacher, we apply it to
two representative REC models, i.e. RealGIN [60] and
TransVG [5], and conduct extensive experiments on three
REC benchmark datasets, namely RefCOCO [54], Ref-
COCO+ [54] and RefCOCOg [38]. Experimental results
show that RefTeacher can greatly exceed the supervised
baselines, e.g. +18.8% gains on 10% RefCOCO. More im-
portantly, using only 10% labeled data, RefTeacher can help
RealGIN achieve near 100% fully supervised performance.
Overall, the contributions of this paper are three-fold:
• We present the first attempt of semi-supervised learn-ing for REC with a strong baseline method called
RefTeacher.
• We identify two challenges of semi-supervised REC,
i.e. sparse supervision signals and worse pseudo-label
noise, and address them with two novel designs,
namely Attention-based Imitation Learning (AIL) and
Adaptive Pseudo-label Weighting (APW).
• RefTeacher achieves significant performance gains on
RefCOCO, RefCOCO+, and RefCOCOg datasets over
the fully supervised methods.
|
Trosten_Hubs_and_Hyperspheres_Reducing_Hubness_and_Improving_Transductive_Few-Shot_Learning_CVPR_2023 | Abstract
Distance-based classification is frequently used in trans-
ductive few-shot learning (FSL). However, due to the high-
dimensionality of image representations, FSL classifiers are
prone to suffer from the hubness problem, where a few points
(hubs) occur frequently in multiple nearest neighbour lists
of other points. Hubness negatively impacts distance-based
classification when hubs from one class appear often among
the nearest neighbors of points from another class, degrading
the classifier’s performance. To address the hubness prob-
lem in FSL, we first prove that hubness can be eliminated by
distributing representations uniformly on the hypersphere.
We then propose two new approaches to embed representa-
tions on the hypersphere, which we prove optimize a tradeoff
between uniformity and local similarity preservation – reduc-
ing hubness while retaining class structure. Our experiments
show that the proposed methods reduce hubness, and signifi-
cantly improves transductive FSL accuracy for a wide range
of classifiers1.
| 1. Introduction
While supervised deep learning has made a significant
impact in areas where large amounts of labeled data are
available [ 6,11], few-shot learning (FSL) has emerged as
a promising alternative when labeled data is limited [ 3,12,
14,16,21,26,28,31,33,39,40]. FSL aims to design
classifiers that can discriminate between novel classes based
on a few labeled instances, significantly reducing the cost of
the labeling procedure.
In transductive FSL, one assumes access to the entire
1Code available at https://github.com/uitml/noHub .
Figure 1. Few-shot accuracy increases when hubness decreases. The figure shows the 1-shot accuracy when classifying different embeddings with SimpleShot [33] on mini-ImageNet [29] (x-axis: hubness; y-axis: accuracy; compared embeddings: None, L2, ZN, TCPR, EASE, noHub (Ours), noHub-S (Ours)).
query set during evaluation. This allows transductive FSL
classifiers to learn representations from a larger number of
samples, resulting in better performing classifiers. However,
many of these methods base their predictions on distances
to prototypes for the novel classes [ 3,16,21,28,39,40].
This makes these methods susceptible to the hubness prob-
lem [ 10,22,24,25], where certain exemplar points (hubs)
appear among the nearest neighbours of many other points.
If a support sample is a hub, many query samples will be
assigned to it regardless of their true label, resulting in low
accuracy. If more training data is available, this effect can
be reduced by increasing the number of labeled samples in
the classification rule – but this is impossible in FSL.
Several approaches have recently been proposed to embed
samples in a space where the FSL classifier’s performance
is improved [ 4,5,7,17,33,35,39]. However, only one of
these directly addresses the hubness problem. Fei et al. [7]
show that embedding representations on a hypersphere with
zero mean reduces hubness. They advocate the use of Z-
score normalization (ZN) along the feature axis of each
representation, and show empirically that ZN can reduce
hubness in FSL. However, ZN does not guarantee a data
mean of zero, meaning that hubness can still occur after ZN.
In this paper we propose a principled approach to em-
bed representations in FSL, which both reduces hubness
and improves classification performance. First, we prove
that hubness can be eliminated by embedding representa-
tions uniformly on the hypersphere. However, distributing
representations uniformly on the hypersphere without any
additional constraints will likely break the class structure
which is present in the representation space – hurting the
performance of the downstream classifier. Thus, in order
to both reduce hubness and preserve the class structure in
the representation space, we propose two new embedding
methods for FSL. Our methods, U niformHyperspherical
Structure-preserving Em beddings (noHub) and noHub with
Support labels (noHub-S), leverage a decomposition of the
Kullback-Leibler divergence between representation and em-
bedding similarities, to optimize a tradeoff between Local
Similarity Preservation (LSP) and uniformity on the hyper-
sphere. The latter method, noHub-S, also leverages label
information from the support samples to further increase the
class separability in the embedding space.
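The following sketch optimizes embeddings for such a uniformity versus local-similarity-preservation tradeoff on the hypersphere; the similarity kernels, the KL form of the LSP term, and the optimizer settings are illustrative assumptions rather than the exact decomposition used by noHub.

```python
import torch
import torch.nn.functional as F

def uniformity(z, t=2.0):
    """Lower when points spread out over the hypersphere (log mean Gaussian potential)."""
    d2 = torch.pdist(z, p=2).pow(2)
    return d2.mul(-t).exp().mean().log()

def lsp(features, z, tau=0.1):
    """Keep embedding-space neighbourhoods close to the feature-space ones (KL term)."""
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    p = F.softmax((features @ features.t() / tau).masked_fill(mask, -1e9), dim=1)
    log_q = F.log_softmax((z @ z.t() / tau).masked_fill(mask, -1e9), dim=1)
    return F.kl_div(log_q, p, reduction="batchmean")

def embed(features, steps=200, lr=0.1, alpha=0.5):
    """Trade off local similarity preservation (alpha) against uniformity (1 - alpha)."""
    features = F.normalize(features, dim=1)
    z = torch.nn.Parameter(features.clone())
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        zn = F.normalize(z, dim=1)                 # keep embeddings on the hypersphere
        loss = alpha * lsp(features, zn) + (1.0 - alpha) * uniformity(zn)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.normalize(z.detach(), dim=1)
```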
Figure 1 illustrates the correspondence between hubness
and accuracy in FSL. Our methods have both the least hub-
ness andhighest accuracy among several recent embedding
techniques for FSL.
Our contributions are summarized as follows.
•We prove that the uniform distribution on the hyper-
sphere has zero hubness and that embedding points uni-
formly on the hypersphere thus alleviates the hubness
problem in distance-based classification for transduc-
tive FSL.
•We propose noHub and noHub-S to embed representa-
tions on the hypersphere, and prove that these methods
optimize a tradeoff between LSP and uniformity. The
resulting embeddings are therefore approximately uni-
form, while simultaneously preserving the class struc-
ture in the embedding space.
•Extensive experimental results demonstrate that noHub
and noHub-S outperform current state-of-the-art em-
bedding approaches, boosting the performance of a
wide range of transductive FSL classifiers, for multiple
datasets and feature extractors.
|
Tan_Hierarchical_Semantic_Correspondence_Networks_for_Video_Paragraph_Grounding_CVPR_2023 | Abstract
Video Paragraph Grounding (VPG) is an essential yet
challenging task in vision-language understanding, which
aims to jointly localize multiple events from an untrimmed
video with a paragraph query description. One of the crit-
ical challenges in addressing this problem is to compre-
hend the complex semantic relations between visual and
textual modalities. Previous methods focus on modeling
the contextual information between the video and text from
a single-level perspective (i.e., the sentence level), ignor-
ing rich visual-textual correspondence relations at different
semantic levels, e.g., the video-word and video-paragraph
correspondence. To this end, we propose a novel Hierar-
chical Semantic Correspondence Network (HSCNet), which
explores multi-level visual-textual correspondence by learn-
ing hierarchical semantic alignment and utilizes dense su-
pervision by grounding diverse levels of queries. Specifi-
cally, we develop a hierarchical encoder that encodes the
multi-modal inputs into semantics-aligned representations
at different levels. To exploit the hierarchical semantic cor-
respondence learned in the encoder for multi-level supervi-
sion, we further design a hierarchical decoder that progres-
sively performs finer grounding for lower-level queries con-
ditioned on higher-level semantics. Extensive experiments
demonstrate the effectiveness of HSCNet and our method
significantly outstrips the state-of-the-arts on two challeng-
ing benchmarks, i.e., ActivityNet-Captions and TACoS.
| 1. Introduction
As a fundamental problem that bridges the gap between
computer vision and natural language processing, Video
Language Grounding (VLG) aiming to localize the video
Figure 1. (a) The bottom-up hierarchy for linguistic semantics is composed of textual words, sentences, and the paragraph. (b) The coarse-to-fine hierarchy for temporal granularity consists of visual counterparts for each level of linguistic query.
segments corresponding to given natural language queries,
has been drawing increasing attention from the community
in these years. Early works in the field of VLG mainly
focused on addressing Video Sentence Grounding (VSG)
[1, 8], whose goal is to localize the most relevant moment
with a single sentence query. Recently, Video Paragraph
Grounding (VPG) is introduced in [2]. It requires to jointly
localize multiple events via a paragraph query consisting of
several temporally ordered sentences. Rather than ground-
ing each event independently, VPG needs to further exploit
the contextual information between the video and the tex-
tual paragraph, which helps to avoid ambiguity and achieve
more precise temporal localization of video events.
Previous VPG works [2,12,26] commonly explore corre-
lations of events by modeling the video-text correspondence
from a single semantic level ( i.e., the sentence level). How-
ever, they neglect the rich visual-textual correspondence at
other semantic levels, such as the word level and paragraph
level, which can also provide some useful information for
grounding events in the video. Considering the grounding
of “The man stops and hits the ball far away”, the seman-
tic relations between video content and the word “hits” is
crucial in determining the end time of the event. Besides,
when we consider the paragraph as a whole, then ground-
ing the holistic paragraph in the video first is beneficial to
suppress the irrelevant events or backgrounds, which eases
the further grounding of sentences.
To be more general, we observe that there naturally exist
two perspectives of hierarchical semantic structures in tack-
ling VPG, which is intuitively illustrated in Figure 1. On
the language side, Figure 1 (a) shows that the semantics of
paragraph query can be divided into an inherent three-level
hierarchy consisting of words, sentences, and the holistic
paragraph in a bottom-up organization. On the video side,
Figure 1 (b) shows that the temporal counterparts of dif-
ferent levels of queries also form a three-level granular-
ity hierarchy with temporally nested dependencies from the
top down. By relating the video content to different lev-
els of query semantics for multi-level query grounding, the
model is enforced to capture more complex relations be-
tween events by reasoning about their interconnections at
different semantic granularities, and exploit richer temporal
clues to facilitate the grounding of events in the video.
Motivated by the above observations, we propose a novel
framework termed as Hierarchical Semantic Correspon-
dence Network (HSCNet) for VPG. Our HSCNet is de-
signed as a multi-level encoder-decoder architecture in or-
der to leverage hierarchical semantic information from the
two perspectives. On the one hand , we learn the hierar-
chical visual-textual semantic correspondence by gradually
aligning the visual and textual semantics into different lev-
els of common spaces from the bottom up. Concretely, we
construct a hierarchical multi-modal encoder on top of the
linguistic semantic hierarchy. It comprises three semantic
levels of visual-textual encoders. Each encoder receives the
semantic representation from a lower level and continues
to establish visual-textual correspondence at a higher level
via iterative multi-modal interactions. On the other hand ,
we utilize richer cross-level contexts and denser supervi-
sion by progressively grounding multiple levels of queries
from coarse to fine. Specifically, we construct a hierarchical
progressive decoder on top of the temporal granularity hier-
archy, which also comprises three levels of decoders. The
lower-level queries are grounded by finer temporal bound-
aries conditioned on contextual knowledge from higher-
level queries, which eases the learning of multi-level lo-calization that provides diverse temporal clues to promote
fine-grained video paragraph grounding.
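The skeleton below sketches this bottom-up encoding and coarse-to-fine decoding flow; the module choices, fusion by context pooling, and span parameterization are simplifying assumptions for illustration and omit most details of the actual architecture.

```python
import torch
import torch.nn as nn

class HierarchicalGrounder(nn.Module):
    """Skeleton of the three-level encode/decode flow (word, sentence, paragraph)."""
    def __init__(self, d=256):
        super().__init__()
        self.encoders = nn.ModuleList(nn.TransformerDecoderLayer(d, 8, batch_first=True)
                                      for _ in range(3))   # bottom-up: word, sentence, paragraph
        self.decoders = nn.ModuleList(nn.TransformerDecoderLayer(d, 8, batch_first=True)
                                      for _ in range(3))   # top-down: paragraph, sentence, word
        self.span_head = nn.Linear(d, 2)                    # normalized (center, width)

    def forward(self, video, word_q, sent_q, para_q):
        # Bottom-up hierarchical encoding: video features are progressively aligned
        # with word-, sentence-, then paragraph-level semantics.
        v = video
        for enc, q in zip(self.encoders, (word_q, sent_q, para_q)):
            v = enc(v, q)                                   # video tokens cross-attend to text
        # Coarse-to-fine decoding: each level's queries attend to the encoded video,
        # conditioned on the context propagated from the level above.
        spans, ctx = [], None
        for dec, q in zip(self.decoders, (para_q, sent_q, word_q)):
            q = q if ctx is None else q + ctx.mean(dim=1, keepdim=True)
            ctx = dec(q, v)                                 # query tokens attend to video
            spans.append(torch.sigmoid(self.span_head(ctx)))  # one span per query
        return spans   # [paragraph spans, sentence spans, word spans]
```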
We evaluate the proposed HSCNet on two challenging
benchmarks, i.e., ActivityNet-Captions and TACoS. Ex-
tensive ablation studies validate the effectiveness of the
method. Our contributions can be summarized as follows:
• We investigate and propose a novel hierarchical model-
ing framework for Video Paragraph Grounding (VPG).
To the best of our knowledge, it’s the first time in
the problem of VPG that hierarchical visual-textual se-
mantic correspondence is explored and multiple levels
of linguistic queries can be grounded.
• We design a novel encoder-decoder architecture to
learn multi-level visual-textual correspondence by hi-
erarchical semantic alignment and progressively per-
form finer grounding for lower-level queries.
• Experiments demonstrate that our proposed HSCNet
achieves new state-of-the-art results on the challeng-
ing ActivityNet-Captions and TACoS benchmarks, re-
markably surpassing the previous approaches.
|
Tien_Revisiting_Reverse_Distillation_for_Anomaly_Detection_CVPR_2023 | Abstract
Anomaly detection is an important application in large-
scale industrial manufacturing. Recent methods for this
task have demonstrated excellent accuracy but come with
a latency trade-off. Memory based approaches with domi-
nant performances like PatchCore or Coupled-hypersphere-
based Feature Adaptation (CFA) require an external mem-
ory bank, which significantly lengthens the execution time.
Another approach that employs Reversed Distillation (RD)
can perform well while maintaining low latency. In this
paper, we revisit this idea to improve its performance, es-
tablishing a new state-of-the-art benchmark on the chal-
lenging MVTec dataset for both anomaly detection and
localization. The proposed method, called RD++ , runs
six times faster than PatchCore, and two times faster
than CFA but introduces a negligible latency compared
to RD. We also experiment on the BTAD and Retinal
OCT datasets to demonstrate our method’s generalizabil-
ity and conduct important ablation experiments to provide
insights into its configurations. Source code will be avail-
able at https://github.com/tientrandinh/
Revisiting-Reverse-Distillation .
| 1. Introduction
Detecting anomalies is a crucial aspect of computer vi-
sion with numerous applications, such as product quality
control [4], and healthcare monitor system [21]. Unsuper-
vised anomaly detection can help to reduce the cost of col-
lecting abnormal samples. This task identifies and local-
izes anomalous regions in images without defect annota-
tions during training. Instead, a set of abnormal-free sam-
ples is utilized. Early approaches rely on generative adver-
sarial models to extract meaningful latent representations
on normal samples [25, 29, 31]. However, these approaches
are computationally expensive, resulting in higher latency
and potential performance limitations on unseen data. Other
approaches leverage the pre-trained Convolutional Neural
Networks (CNNs) [19] backbones to extract comprehensive
visual features for anomaly detection systems [3, 23].
Figure 1. Comparisons of different anomaly detection methods in
terms of AUROC sample (vertical axis), inference time (horizontal
axis), and memory footprint (circle radius). Our RD++ achieves
thehighest AUROC sample metric for anomaly detection while
being 6×faster than PatchCore, 4×faster than CSFlow, and 2×
faster than CFA. Additionally, RD++ requires only 4GB of mem-
ory for inference, making it one of the least memory-use methods.
The test environment was conducted on a computer with Intel(R)
Xeon(R) 2.00GHz (4 cores) and Tesla T4 GPU (15GB VRAM).
Alternative approach employs knowledge distillation
(KD) [14] based frameworks. For example, Salehi et al.
[30] set up a teacher-student network pair, and knowledge
is transferred from teacher to student. During training, the
student is learned from only normal samples. Thus, it is
expected to learn the distribution of normal samples, sub-
sequently generating the out-of-distribution representations
when inference with anomalous query [30]. However, Deng
and Li [8] point out that the statement is not always accu-
rate due to limitations in the similarity of network architec-
tures and the same data flows in the teacher-student model.
To overcome these limitations, they propose a reverse flow,
called Reverse Distillation (RD), in which the teacher out-
put is fed to the student through a One-class Bottleneck
module (OCBE). The reverse distillation approach achieves
competitive performance and also maintains low latency.
Recent research in anomaly detection, such as Patch-
Core [27], and CFA [20], have achieved state-of-the-art per-
formance in detecting and localizing anomalies. However,
these methods are based on the memory bank framework,
which leads to significant latency and makes them challeng-
ing to apply in practical scenarios. Our key question: How
can we develop a method that achieves high accuracy and
fast inference for real-world applications?
This paper answers the question from the perspective of
reverse distillation. We identify the limitations of the RD
approach by examining feature compactness and anoma-
lous signal suppression. We argue that relying solely on
the distillation task and an OCBE module is insufficient for
providing a compact representation to the student. Further-
more, we do not observe an explicit mechanism to discard
anomalous patterns using the OCBE block as the authors
claim. To address these concerns, we incorporate RD with
multi-task learning to propose RD++, which demonstrates
a favorable gain in performance.
The contributions of the proposed RD++ are highlighted
as follows:
• We propose RD++ to tackle two tasks. First, a feature compactness task, addressed by presenting a self-supervised optimal transport method. Second, an anomalous signal suppression task, addressed by simulating pseudo-abnormal samples with simplex noise and minimizing the reconstruction loss (a rough sketch of this is given after this list).
• We conduct extensive experiments on several pub-
lic datasets in different domains, including MVTec,
BTAD, and Retinal OCT. Results show that our ap-
proach achieves state-of-the-art performance on detec-
tion and localization, demonstrating strong general-
ization capabilities across domains. Furthermore, our
method’s real-time capability is at least twice as fast as
its latest counterparts (see Fig. 1), making it a promis-
ing method for practical applications.
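As referenced in the first contribution above, the following sketch illustrates the anomalous signal suppression task; smoothed Gaussian noise stands in for simplex noise, and the projector module and loss weights are assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def structured_noise(shape, smooth=7, device="cpu"):
    """Low-frequency noise (a stand-in for simplex noise) that perturbs local regions."""
    n = torch.randn(shape, device=device)
    k = torch.ones(1, 1, smooth, smooth, device=device) / smooth**2
    return F.conv2d(n.view(-1, 1, *shape[-2:]), k, padding=smooth // 2).view(shape)

def suppression_loss(features, projector, noise_scale=0.5):
    """Pseudo-abnormal features should be projected back to their clean version."""
    noisy = features + noise_scale * structured_noise(features.shape, device=features.device)
    return F.mse_loss(projector(noisy), features.detach())

# During training (sketch): total_loss = distillation_loss + suppression_loss(f_teacher, proj)
```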
|
Tripathi_3D_Human_Pose_Estimation_via_Intuitive_Physics_CVPR_2023 | Abstract
Estimating 3D humans from images often produces im-
plausible bodies that lean, float, or penetrate the floor. Such
methods ignore the fact that bodies are typically supported
by the scene. A physics engine can be used to enforce phys-
ical plausibility, but these are not differentiable, rely on
unrealistic proxy bodies, and are difficult to integrate into ex-
isting optimization and learning frameworks. In contrast, we
exploit novel intuitive-physics (IP) terms that can be inferred
from a 3D SMPL body interacting with the scene. Inspired
by biomechanics, we infer the pressure heatmap on the body,
theCenter of Pressure (CoP) from the heatmap, and the
SMPL body’s Center of Mass (CoM ). With these, we develop
IPMAN , to estimate a 3D body from a color image in a “sta-
ble” configuration by encouraging plausible floor contact
and overlapping CoP andCoM . Our IP terms are intuitive,
easy to implement, fast to compute, differentiable, and can be
integrated into existing optimization and regression methods.
We evaluate IPMAN on standard datasets and MoYo, a new
dataset with synchronized multi-view images, ground-truth
3D bodies with complex poses, body-floor contact, CoM and
pressure. IPMAN produces more plausible results than the
state of the art, improving accuracy for static poses, while
not hurting dynamic ones. Code and data are available for
research at https://ipman.is.tue.mpg.de .
| 1. Introduction
To understand humans and their actions, computers need
automatic methods to reconstruct the body in 3D. Typi-
cally, the problem entails estimating the 3D human pose
and shape ( HPS) from one or more color images. State-of-
the-art ( SOTA ) methods [46, 51, 75, 102] have made rapid
progress, estimating 3D humans that align well with image
features in the camera view. Unfortunately, the camera view
can be deceiving. When viewed from other directions, or
when placed in a 3D scene, the estimated bodies are often
physically implausible: they lean, hover, or penetrate the
ground (see Fig. 1 top). This is because most SOTA methods
reason about humans in isolation ; they ignore that people
move in a scene, interact with it, and receive physical sup-
port by contacting it. This is a deal-breaker for inherently
3D applications, such as biomechanics, augmented/virtual
reality (AR/VR) and the “metaverse”; these need humans to
be reconstructed faithfully and physically plausibly with re-
spect to the scene. For this, we need a method that estimates
the 3D human on a ground plane from a color image in a
configuration that is physically “stable” .
This is naturally related to reasoning about physics and
support. There exist many physics simulators [10,30,60] for
games, movies, or industrial simulations, and using these for
plausible HPS estimation is increasingly popular [66,74,96].
However, existing simulators come with two significant
problems: (1) They are typically non-differentiable black
boxes , making them incompatible with existing optimiza-
tion and learning frameworks. Consequently, most meth-
ods [64, 95, 96] use them with reinforcement learning to
evaluate whether a certain input has the desired outcome,
but with no ability to reason about how changing inputs af-
fects the outputs. (2) They rely on an unrealistic proxy body
model for computational efficiency; bodies are represented
as groups of rigid 3D shape primitives. Such proxy models
are crude approximations of human bodies, which, in reality,
are much more complex and deform non-rigidly when they
move and interact. Moreover, proxies need a priori known
body dimensions that are kept fixed during simulation. Also,
these proxies differ significantly from the 3D body mod-
els [41, 54, 92] used by SOTA HPS methods. Thus, current
physics simulators are too limited for use in HPS.
What we need, instead, is a solution that is fully differen-
tiable, uses a realistic body model, and seamlessly integrates
physical reasoning into HPS methods (both optimization-
and regression-based). To this end, instead of using full
physics simulation, we introduce novel intuitive-physics (IP)
terms that are simple, differentiable, and compatible with a
body model like SMPL [54]. Specifically, we define terms
that exploit an inferred pressure heatmap of the body on the
ground plane, the Center of Pressure (CoP) that arises from
the heatmap, and the SMPL body’s Center of Mass (CoM )
projected on the floor; see Fig. 2 for a visualization. Intu-
itively, bodies whose CoM lie close to their CoP are more
stable than ones with a CoP that is further away (see Fig. 5);
the former suggests a static pose ,e.g.standing or holding a
yoga pose, while the latter a dynamic pose , e.g., walking.
We use these intuitive-physics terms in two ways. First,
we incorporate them in an objective function that extends
SMPLify-XMC [59] to optimize for body poses that are
stable. We also incorporate the same terms in the training
loss for an HPS regressor, called IPMAN (Intuitive-Physics-
based huMAN). In both formulations, the intuitive-physics
terms encourage estimates of body shape and pose that have
sufficient ground contact, while penalizing interpenetration
and encouraging an overlap of the CoP and CoM.
Our intuitive-physics formulation is inspired by work in
biomechanics [32, 33, 61], which characterizes the stability
of humans in terms of relative positions between the CoP,
theCoM , and the Base of Support (BoS). The BoS is de-
fined as the convex hull of all contact regions on the floor
(Fig. 2). Following past work [6,71,74], we use the “inverted
pendulum” model [85, 86] for body balance; this considers
poses as stable if the gravity-projected CoM onto the floor
lies inside the BoS. Similar ideas are explored by Scott et
al. [71] but they focus on predicting a foot pressure heatmap
from 2D or 3D body joints. We go significantly further to
exploit stability in training an HPS regressor. This requires
two technical novelties.
Figure 2. (1) A SMPL mesh sitting. (2) The inferred pressure map on the ground (color-coded heatmap), CoP (green), CoM (pink), and Base of Support (BoS, yellow polygon). (3) Segmentation of SMPL into N_P = 10 parts, used for computing CoM; see Sec. 3.2.
The first involves computing CoM . To this end, we uni-
formly sample points on SMPL ’s surface, and calculate each
body part’s volume. Then, we compute CoM as the average
of all uniformly sampled points weighted by the correspond-
ing part volumes. We denote this as pCoM , standing for
“part-weighted CoM ”. Importantly, pCoM takes into account
SMPL ’s shape, pose, and all blend shapes, while it is also
computationally efficient and differentiable.
The second involves estimating CoP directly from the
image, without access to a pressure sensor. Our key insight is
that the soft tissues of human bodies deform under pressure,
e.g., the buttocks deform when sitting. However, SMPL
does not model this deformation; it penetrates the ground
instead of deforming. We use the penetration depth as a
proxy for pressure [68]; deeper penetration means higher
pressure. With this, we estimate a pressure field on SMPL ’s
mesh and compute the CoP as the pressure-weighted average
of the surface points. Again this is differentiable.
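Both quantities admit short differentiable implementations; the sketch below illustrates them under simplifying assumptions (uniform surface samples, a y-up world, and penetration depth mapped linearly to pressure), and is not the paper's exact formulation.

```python
import torch

def part_weighted_com(points, part_ids, part_volumes):
    """points: (N, 3) surface samples; part_ids: (N,) part index; part_volumes: (P,)."""
    w = part_volumes[part_ids]                     # each sample weighted by its part's volume
    return (w[:, None] * points).sum(0) / w.sum()  # (3,) differentiable CoM

def center_of_pressure(vertices, floor_height=0.0):
    """Use ground penetration depth as a proxy for pressure on the body surface."""
    depth = (floor_height - vertices[:, 1]).clamp(min=0.0)   # assumes a y-up world frame
    pressure = depth / (depth.sum() + 1e-8)
    cop = (pressure[:, None] * vertices).sum(0)              # pressure-weighted average
    return cop, pressure

# Stability term (sketch): penalize the horizontal CoM-to-CoP offset, e.g.
#   loss_stab = ((com - cop)[[0, 2]] ** 2).sum()   # distance in the x/z ground plane
```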
For evaluation, we use a standard HPS benchmark
(Human3.6M [37]), but also the RICH [35] dataset. How-
ever, these datasets have limited interactions with the floor.
We thus capture a novel dataset, MoYo , of challenging yoga
poses, with synchronized multi-view video, ground-truth
SMPL-X [63] meshes, pressure sensor measurements, and
body CoM .IPMAN , in both of its forms, and across all
datasets, produces more accurate and stable 3D bodies than
the state of the art. Importantly, we find that IPMAN im-
proves accuracy for static poses, while not hurting dynamic
ones. This makes IPMAN applicable to everyday motions.
To summarize: (1) We develop IPMAN , the first HPS
method that integrates intuitive physics. (2) We infer biome-
chanical properties such as CoM ,CoP and body pressure.
(3) We define novel intuitive-physics terms that can be easily
integrated into HPS methods. (4) We create MoYo , a dataset
that uniquely has complex poses, multi-view video, and
ground-truth bodies, pressure, and CoM . (5) We show that
our IP terms improve HPS accuracy and physical plausibility.
(6) Data and code are available for research.
|
Tarchoun_Jedi_Entropy-Based_Localization_and_Removal_of_Adversarial_Patches_CVPR_2023 | Abstract
Real-world adversarial physical patches were shown to
be successful in compromising state-of-the-art models in a
variety of computer vision applications. Existing defenses
that are based on either input gradient or features analy-
sis have been compromised by recent GAN-based attacks
that generate naturalistic patches. In this paper, we pro-
pose Jedi, a new defense against adversarial patches that
is resilient to realistic patch attacks. Jedi tackles the patch
localization problem from an information theory perspec-
tive; leverages two new ideas: (1) it improves the identi-
fication of potential patch regions using entropy analysis :
we show that the entropy of adversarial patches is high,
even in naturalistic patches; and (2) it improves the local-
ization of adversarial patches, using an autoencoder that
is able to complete patch regions from high entropy ker-
nels. Jedi achieves high-precision adversarial patch local-
ization, which we show is critical to successfully repair the
images. Since Jedi relies on an input entropy analysis, it is
model-agnostic , and can be applied on pre-trained off-the-
shelf models without changes to the training or inference of
the protected models. Jedi detects on average 90% of ad-
versarial patches across different benchmarks and recovers
up to 94% of successful patch attacks (Compared to 75%
and 65% for LGS and Jujutsu, respectively).
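To make the entropy analysis concrete, a sliding-window Shannon-entropy sketch is given below; the window size, stride, and bin count are arbitrary illustrative choices, not Jedi's tuned parameters.

```python
import numpy as np

def local_entropy_map(gray, win=32, stride=16, bins=32):
    """Shannon entropy of pixel intensities in sliding windows.

    gray: (H, W) grayscale image with values in [0, 255].
    Adversarial patches tend to produce high-entropy windows, which flags
    candidate patch regions for the later localization stage.
    """
    H, W = gray.shape
    rows = (H - win) // stride + 1
    cols = (W - win) // stride + 1
    ent = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            tile = gray[i * stride:i * stride + win, j * stride:j * stride + win]
            hist, _ = np.histogram(tile, bins=bins, range=(0, 255))
            p = hist[hist > 0].astype(float)
            p /= p.sum()
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

img = np.random.randint(0, 256, (256, 256)).astype(np.float32)
print(local_entropy_map(img).max())
```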
| 1. Introduction
Deep neural networks (DNNs) are vulnerable to adver-
sarial attacks [20] where an adversary adds carefully crafted
imperceptible perturbations to an input (e.g., by lp-norm
bounded noise magnitude), forcing the models to misclas-
sify. Several adversarial noise generation methods have
been proposed [4, 10], often as part of a cat and mouse
game where new defenses emerge [1, 18] only to be shown
vulnerable to new adaptive attacks. Under real-world con-
ditions, an attacker creates a physical patch (thus, spatially
constrained) that contains an adversarial pattern. Such a
patch can be placed as a sticker on traffic signs [9], worn
as part of an item of clothing [13, 21], or introduced us-
ing a display monitor [12], providing a practical approach
for attackers to carry out adversarial attacks. First proposed
by Brown et al. [3], these patches are different from tradi-
tional adversarial attacks in two primary ways: (1) they oc-
cupy a constrained space within an image; and (2) they may
not be noise budget-constrained within the patch. Several
adversarial patch generation methods have been demon-
strated [13, 14, 16, 21], many of which showcase real-life
implementations, making them an ongoing threat to visual
ML systems.
Several approaches aiming to detect adversarial patches
and defuse their impact have been proposed [5, 11, 15, 17,
22, 24, 25]. One category of defenses attempts to locate
patches by detecting anomalies caused by the presence of
the patch. These anomalies can be identified in the input
pixel data such as in the case of Localized Gradient Smooth-
ing [17] where the patch is located using high pixel gradi-
ent values. Alternatively, they can be identified in the fea-
ture space where the adversarial patch can create irregular
saliency maps with regards to its targeted class that can be
exploited by defenses such as in Digital Watermarking [11]
and Jujutsu [5]. These defenses have two primary limita-
tions: (1) They are only moderately successful against base-
line attacks enabling recovery from many attacks (e.g., 75%
for LGS and 65% for Jujutsu); and (2) they are vulnera-
ble to adaptive attacks that generate naturalistic adversarial
patches that are meant to use patterns similar to natural im-
ages. Hu et al. [13] train a GAN to generate naturalistic
Figure 1. Illustration on a patch attack against Yolo (panels: successful attack with an undetected person; detection of entropy peaks; filtering + auto-encoder for patch localization; inpainting of the localized patch, recovering the detection). Jedi surgically localizes the patch via entropy analysis and recovers the image.
patches, matching the visual properties of normal images,
and show that they are able to bypass defenses that are based
on either input or featu |
Sun_Correspondence_Transformers_With_Asymmetric_Feature_Learning_and_Matching_Flow_Super-Resolution_CVPR_2023 | Abstract
This paper solves the problem of learning dense visual
correspondences between different object instances of the
same category with only sparse annotations. We decompose
this pixel-level semantic matching problem into two easier
ones: (i) First, local feature descriptors of source and tar-
get images need to be mapped into shared semantic spaces
to get coarse matching flows. (ii) Second, matching flows
in low resolution should be refined to generate accurate
point-to-point matching results. We propose asymmetric
feature learning and matching flow super-resolution based
on vision transformers to solve the above problems. The
asymmetric feature learning module exploits a biased cross-
attention mechanism to encode token features of source im-
ages with their target counterparts. Then matching flow in
low resolutions is enhanced by a super-resolution network
to get accurate correspondences. Our pipeline is built upon
vision transformers and can be trained in an end-to-end
manner. Extensive experimental results on several popular
benchmarks, such as PF-PASCAL, PF-WILLOW, and SPair-
71K, demonstrate that the proposed method can catch sub-
tle semantic differences in pixels efficiently. Code is avail-
able on https://github.com/YXSUNMADMAX/ACTR.
| 1. Introduction
Robust semantic matching methods aim to build dense
visual correspondences between different objects or scenes
from the same category regardless of the large variations
in appearances and layouts. These algorithms have been
widely exploited in various computer vision tasks, such as
object recognition [12,48], cosegmentation [1,40], few-shot
learning [20, 29], image editing [15, 18] and etc. Different
from classical dense matching tasks, such as stereo match-
ing [6,41] and image registration [2,19], semantic matching
aims to find the visual consistency in image pairs with large
intra-class appearance and layout variations.
State-of-the-art methods, such as Proposal Flow [16],
NCNet [37], Hyperpixel Flow [34], CATs [8] and etc, typ-
ically extract features from backbones to measure point-to-
point similarity in 4-D correlation tensors, and then refine
these 4-D tensors to enforce neighborhood matching con-
sistency. Despite that these algorithms have achieved im-
pressive results, there are still two key issues that haven’t
been discussed thoroughly. First, how to learn feature rep-
resentations appropriate for semantic correspondence? Sec-
ond, how to enforce neighborhood consensus when the 4-
D correlation tensors are in high resolution? For the
first problem, Hyperpixel Flow [34] selects feature maps in
convolutional neural networks with Beam search [32], and
MMNet [49] aggregates feature maps at different resolu-
tions in a top-down manner to get feature maps in high res-
olution. For the second problem, V AT [20] firstly reduces
the resolutions of 4-D correlation tensors with a 4-D con-
volution operation and then conducts shifted window atten-
tion [31] to further reduce the computational cost. Although
these exploratory works brought a lot of new ideas, ques-
tions listed above still deserve to be discussed seriously.
In this paper, we aim to answer the above questions and
propose a novel pipeline for semantic correspondence. Our
pipeline consists of feature extraction based on pre-trained
feature backbone [5, 17, 50], asymmetric feature learning,
and matching flow super-resolution. Different from state-
of-the-art methods [8, 34, 49] which directly calculate 4-D
matching scores with refined generic backbone features, our
core idea focuses on finding a shared semantic space where
local feature descriptors of images can be aligned with
their to-match counterpart. The asymmetric feature learn-
ing module reconstructs source image features with target
image features to reduce domain discrepancy between the
two thus avoiding reconstructing source and target image
features synchronously as in [9, 28]. Meanwhile, to high-
light important image regions in target images, we also use
source images to identify discriminative parts of foreground
objects. In this way, a specific feature space is found for ev-
ery image pair to conduct semantic matching.
To avoid huge computational cost during neighborhood
consensus enhancement, we map the matching information
hidden in 4-D correlation matrices to 2-D matching flow
maps through the soft argmax [25]. By reducing the di-
mension of the optimization goal from 4-D to 2-D, com-
putational cost is alleviated drastically. To achieve pixel-
level correspondence, we conduct matching flow super-
resolution to enhance neighborhood consensus and improve
matching accuracy at the same time. We find that the pro-
posed method works quite well in conjunction with trans-
former feature backbones, such as MAE [17], DINO [5],
and iBOT [50], so we call the proposed method asym-
metric correspondence transformer, written as ACTRans-
former, and train it end-to-end. We summarize our contri-
butions as follows.
•We introduce a novel pipeline for semantic correspon-
dence which contains generic feature extraction, asymmet-
ric feature learning, and matching flow super-resolution. By
conducting asymmetric feature learning, we extract specific
features for every image pair and thus get more accurate
correspondences. Besides, we replace the 4-D correlation
refinement with the 2-D matching flow super-resolution,
which saves computational cost greatly.
•We propose asymmetric feature learning that can project
features of image pairs into a shared feature space easily and
reduce the feature ambiguity at the same time. For match-
ing flow super-resolution, we conduct multi-path super-
resolution to benefit from different matching tensors and
acquire significant improvements.
•Experiments on several popular benchmarks indicate that
the proposed ACTRansformer outperforms previous state-
of-the-art methods by clear margins. Qualitative results are
presented in Figure 1.
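For reference, a minimal PyTorch sketch of the soft-argmax step described above, which turns a 4-D correlation tensor into a 2-D matching flow; the tensor shapes and temperature are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

def soft_argmax_flow(corr, temperature=0.02):
    """corr: (B, Hs, Ws, Ht, Wt) correlation between source and target features.

    For every source location, softmax over all target locations and return
    the expected target coordinates, i.e. a (B, Hs, Ws, 2) matching flow map.
    """
    B, Hs, Ws, Ht, Wt = corr.shape
    prob = torch.softmax(corr.view(B, Hs, Ws, Ht * Wt) / temperature, dim=-1)
    ys = torch.arange(Ht, dtype=corr.dtype, device=corr.device)
    xs = torch.arange(Wt, dtype=corr.dtype, device=corr.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")            # (Ht, Wt)
    coords = torch.stack([gx, gy], dim=-1).view(-1, 2)        # (Ht*Wt, 2), (x, y)
    return prob @ coords                                      # (B, Hs, Ws, 2)

corr = torch.randn(2, 16, 16, 16, 16)
print(soft_argmax_flow(corr).shape)  # torch.Size([2, 16, 16, 2])
```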
|
Song_ObjectStitch_Object_Compositing_With_Diffusion_Model_CVPR_2023 | Abstract
Object compositing based on 2D images is a challeng-
ing problem since it typically involves multiple processing
stages such as color harmonization, geometry correction
and shadow generation to generate realistic results. Fur-
thermore, annotating training data pairs for compositing
requires substantial manual effort from professionals, and
is hardly scalable. Thus, with the recent advances in gen-
erative models, in this work, we propose a self-supervised
framework for object compositing by leveraging the power
of conditional diffusion models. Our framework can holis-
tically address the object compositing task in a unified
model, transforming the viewpoint, geometry, color and
shadow of the generated object while requiring no manual
labeling. To preserve the input object’s characteristics, we
introduce a content adaptor that helps to maintain categori-
cal semantics and object appearance. A data augmentation
method is further adopted to improve the fidelity of the gen-
erator. Our method outperforms relevant baselines in both
realism and faithfulness of the synthesized result images in
a user study on various real-world images.
| 1. Introduction
Image compositing is an essential task in image editing
that aims to insert an object from a given image into another
image in a realistic way. Conventionally, many sub-tasks
are involved in compositing an object to a new scene, in-
cluding color harmonization [6, 7, 19, 51], relighting [52],
and shadow generation [16, 29, 43] in order to naturally
blend the object into the new image. As shown in Tab. 1,
most previous methods [6, 7, 16, 19, 28, 43] focus on a sin-
gle sub-task required for image compositing. Consequently,
they must be appropriately combined to obtain a composite
image where the input object is re-synthesized to have the
Method Geometry Light Shadow View
ST-GAN [28] ✓ ✗ ✗ ✗
SSH [19] ✗ ✓ ✗ ✗
DCCF [51] ✗ ✓ ✗ ✗
SSN [43] ✗ ✗ ✓ ✗
SGRNet [16] ✗ ✗ ✓ ✗
GCC-GAN [5] ✓ ✓ ✗ ✗
Ours ✓ ✓ ✓ ✓
Table 1. Prior works only focus on one or two aspects of object
compositing, and they cannot synthesize novel views. In contrast,
our model can address all perspectives as listed.
color, lighting and shadow that is consistent with the back-
ground scene. As shown in Fig. 1, results produced in this
way still look unnatural, partly due to the viewpoint of the
inserted object being different from the overall background.
Harmonizing the geometry and synthesizing novel views
have often been overlooked in 2D image compositing,
which require an accurate understanding of both the geome-
try of the object and the background scene from 2D images.
Previous works [15, 21, 22] handle 3D object compositing
with explicit background information such as lighting posi-
tions and depth. More recently, [28] utilize GANs to esti-
mate the homography of the object. However, this method
is limited to placing furniture in indoor scenes. In this pa-
per, we propose a generic image compositing method that
is able to harmonize the geometry of the input object along
with color, lighting and shadow with the background image
using a diffusion-based generative model.
In recent years, generative models such as GANs [4,
10, 17, 20] and diffusion models [1, 14, 30, 32, 33, 38, 39]
have shown great potential in synthesizing realistic images.
In particular, diffusion model-based frameworks are versa-
tile and outperform various prior methods in image edit-
ing [1, 23, 32] and other applications [11, 31, 35]. How-
ever, most image editing diffusion models focus on using
text inputs to manipulate images [1,3,9,23,33], which is in-
sufficient for image compositing as verbal representations
cannot fully capture the details or preserve the identity and
appearance of a given object image. There have been re-
cent works [23, 39] focusing on generating diverse contexts
while preserving the key features of the object; however,
these models are designed for a different task than object
compositing. Furthermore, [23] requires fine-tuning the
model for each input object and [39] also needs to be fine-
tuned on multiple images of the same object. Therefore they
are limited for general object compositing.
In this work, we leverage diffusion models to simulta-
neously handle multiple aspects of image compositing such
as color harmonization, relighting, geometry correction and
shadow generation. With image guidance rather than text
guidance, we aim to preserve the identity and appearance of
the original object in the generated composite image.
Specifically, our model synthesizes a composite image
given (i) a source object, (ii) a target background image,
and (iii) a bounding box specifying the location to insert
the object. The proposed framework consists of a con-
tent adaptor and a generator module: the content adaptor
is designed to extract a representation from the input object
containing both high-level semantics and low-level details
such as color and shape; the generator module preserves the
background scene while improving the generation quality
and versatility. Our framework is trained in a fully self-
supervised manner and no task-specific labeling is required
at any point during training. Moreover, various data aug-
mentation techniques are applied to further improve the fi-
delity and realism of the output. We evaluate our proposed
method on a real-world dataset closely simulating real use
cases for image compositing.
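As a rough illustration of how a self-supervised compositing example of this kind can be assembled (the authors' actual masking and augmentation choices are not specified here, so every detail below is an assumption):

```python
import numpy as np

def make_training_example(image, obj_mask, jitter=0.1):
    """Build one self-supervised example for object compositing.

    image:    (H, W, 3) photo containing the object.
    obj_mask: (H, W) boolean mask of the object instance.

    The (appearance-jittered) object crop is the conditioning input, the image
    with the object region erased is the background context, and the original
    image is the reconstruction target.
    """
    ys, xs = np.where(obj_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    crop = image[y0:y1, x0:x1].astype(np.float32)
    crop = np.clip(crop * (1.0 + jitter * np.random.randn()), 0, 255)

    background = image.copy()
    background[y0:y1, x0:x1] = 127            # erase the bounding-box region
    bbox = (y0, x0, y1, x1)
    return crop, background, bbox, image      # condition, context, location, target

img = np.random.randint(0, 256, (128, 128, 3)).astype(np.uint8)
mask = np.zeros((128, 128), dtype=bool)
mask[40:80, 50:90] = True
crop, bg, box, target = make_training_example(img, mask)
print(crop.shape, box)
```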
Our contributions are summarized as follows:
• We present the first diffusion model-based framework
for generative object compositing that can handle mul-
tiple aspects of compositing such as viewpoint, geom-
etry, lighting and shadow.
• We propose a content adaptor module which learns a
descriptive multi-modal embedding from images, en-
abling image guidance for diffusion models.
• Our framework is trained in a self-supervised man-
ner without any task-specific annotations, employing
data augmentation techniques to improve the fidelity
of generation.
• We collect a high-resolution real-world dataset for ob-
ject compositing with diverse images, containing man-
ually annotated object scales and locations.
|
Toschi_ReLight_My_NeRF_A_Dataset_for_Novel_View_Synthesis_and_CVPR_2023 | Abstract
In this paper, we focus on the problem of rendering novel
views from a Neural Radiance Field (NeRF) under unob-
served light conditions. To this end, we introduce a novel
dataset, dubbed ReNe ( Relighting NeRF), framing real
world objects under one-light-at-time (OLAT) conditions,
annotated with accurate ground-truth camera and light
poses. Our acquisition pipeline leverages two robotic arms
holding, respectively, a camera and an omni-directional
point-wise light source. We release a total of 20 scenes
depicting a variety of objects with complex geometry and
challenging materials. Each scene includes 2000 images,
acquired from 50 different points of view under 40 different
OLAT conditions. By leveraging the dataset, we perform an
ablation study on the relighting capability of variants of the
vanilla NeRF architecture and identify a lightweight archi-
tecture that can render novel views of an object under novel
light conditions, which we use to establish a non-trivial
baseline for the dataset. Dataset and benchmark are avail-
able at https://eyecan-ai.github.io/rene.
| 1. Introduction
Inverse rendering [29, 47, 52, 73] addresses the problem
of estimating the physical attributes of an object, such as its
geometry, material properties and lighting conditions, from
a set of images or even just a single one. This task is a
longstanding problem for the vision and graphics commu-
nities, since it unlocks the creation of novel renderings of
an object from arbitrary viewpoints and under unobserved
lighting conditions. An effective and robust solution to this
problem would have significant value for a wide range of
applications in gaming, robotics and augmented reality.
Recently, Neural Radiance Fields (NeRF) [36] has con-
tributed tremendously to the novel view synthesis sub-task
of inverse rendering pipelines. By mapping an input 5D
vector (3D position and 2D viewing direction) to a 4D con-
tinuous field of volume density and color by means of a neu-
ral network, NeRF learns the geometry and appearance of a
single scene from a set of posed images. The appealing re-
sults in novel view synthesis have attracted a lot of attention
from the research community and triggered many follow-up
Dataset | Multiple categories | Real-World | Background Shadows | Public | Light Supervision
Gross et al. [14] | ✗ | ✓ | ✗ | ✓ | ✓
Sun et al. [57] | ✗ | ✓ | ✗ | ✓ | ✓
Wang et al. [68] | ✗ | ✓ | ✗ | ✓ | ✓
Zhang et al. [74] | ✗ | ✓ | ✗ | ✓ | ✓
Srinivasan et al. [55] | ✓ | ✗ | ✗ | ✓ | ✓
Zhang et al. [75] | ✓ | ✓ | ✓ | ✓ | ✗
Zhang et al. [75] | ✓ | ✗ | ✗ | ✓ | ✓
Bi et al. [2] | ✓ | ✓ | ✗ | ✗ | ✓
ReNe | ✓ | ✓ | ✓ | ✓ | ✓
Table 1. Overview of relighting datasets. Our dataset is the first
one featuring a variety of objects and materials captured with real-
world sensors that provides ground-truth light positions and also
presents challenging cast shadows.
works aimed at overcoming the main limitations of NeRF,
e.g. reduce inference runtime [23, 24, 27, 48, 49, 71], enable
modeling of deformable objects [9, 25, 42, 45, 46, 62], and
generalization to novel scenes [5,15,19,39,50,53,61,63,72].
However, less attention has been paid to the relighting abil-
ity of NeRFs. Although NeRF and its variants represent
nowadays the most compelling strategy for view synthe-
sis, the learned scene representation entangles material and
lighting and, thus, cannot be directly used to generate views
under novel, unseen lighting conditions. A few existing
works [3, 4, 55, 75] try to overcome this structural NeRF
limitation by learning to model the scene appearance as a
function of reflectance , which accounts for both scene ge-
ometry and lighting modeling. However, these methods in-
cur a great computation cost, mainly due to the need to ex-
plicitly model light visibility and/or the components of a
microfacet Bidrectional Reflectance Distribution Function
(BRDF) [65] like diffuse albedo, specular roughness, and
point normals: for instance NeRV [55] uses 128 TPU cores
for 1 day while [3] is trained on 4 GPUs for 2 days.
As highlighted in Tab. 1, one of the main challenges to
foster research in this direction is the absence of real-world
datasets featuring generic and varied objects with ground-
truth light direction, both at training and test time. The lat-
ter is key to create a realistic quantitative benchmark for
relighting methods, whose availability is one of the main
driving forces behind fast-paced development of a machine
learning topic. Indeed, the above mentioned NeRF-like
methods for relighting mainly consider a handful of syn-
thetic images to provide quantitative results (3 and 4 scenes
in NeRV [55] and NeRFactor [75], respectively) while no
quantitative results are provided in [3]. The only dataset
with scenes acquired by a real sensor is proposed in [2],
which, however, assumes images captured under collocated
view and lighting setup, i.e. a smartphone with flash light
on, which limits the amount of cast shadows and simplifies
the task. Moreover, the dataset is not publicly available.
Some available real-world datasets with ground-truth light
positions feature human faces or portraits [14, 57, 68, 74].
In these datasets, the subject is usually seated in the center
of a light-stage with cameras arranged over a dome array
positioned in front of the subject. Although these dataset
provide real scenes with ground-truth annotations for light
positions, the background is masked-out and shadows cast
on the background, which are hard to model because they
require a precise knowledge about the geometry of the over-
all scene, are ignored.
Therefore, in this paper we try to answer the research
questions: can we design a data acquisition methodology
suitable to collect a set of images of an object under one-
light-at-time (OLAT) [74] illumination with high-quality
camera and light pose annotations which requires minimal
human supervision? We then leverage it to investigate a sec-
ond question: can we design a novel Neural Radiance Field
architecture to learn to perform relighting with reasonable
computational requirements? To answer the first question,
we design a capture system relying on two robotic arms.
While one arm holds the camera and shots pictures from
viewpoints uniformly distributed on a spherical wedge, the
other moves the light source across points uniformly dis-
tributed on a dome. As a result, we collect the ReNe dataset,
made of 20 scenes framing daily objects with challenging
geometry, varied materials and visible cast shadows, com-
posed of 50 camera view-points under 40 OLAT light con-
ditions, i.e. 2000 frames per scene. Examples of images
from the dataset are shown in Fig. 1. With a subset of im-
ages from each scene, we create a novel hold-out dataset
for joint relighting and novel view synthesis evaluation that
will be used as an online benchmark to foster research on
this important topic. As regards the second question, thanks
to the new dataset we conduct a study on the relighting ca-
pability of NeRF. In particular, we investigate on how the
standard NeRF architecture can be modified to take into
account the position of the light when generating the ap-
pearance of a scene. Our study shows that by estimating
color with two separate sub-networks, one in charge of soft-
shadow prediction and one responsible for neurally approx-
imating the BRDF, we can perform an effective relighting,
e.g. cast complex shadows. We provide results of our novel
architecture as a reference baseline for the new benchmark.
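A minimal PyTorch sketch of the idea of splitting radiance estimation into a soft-shadow branch and a neural-BRDF branch, both conditioned on the light position; the layer sizes, the absence of positional encoding, and the multiplicative combination of the two heads are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class RelightableNeRF(nn.Module):
    """Density from position; color = neural-BRDF output * soft-shadow visibility."""

    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.sigma = nn.Linear(hidden, 1)
        # Neural BRDF head: features + view direction + light position -> RGB.
        self.brdf = nn.Sequential(nn.Linear(hidden + 3 + 3, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 3), nn.Sigmoid())
        # Soft-shadow head: features + light position -> scalar visibility.
        self.shadow = nn.Sequential(nn.Linear(hidden + 3, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x, view_dir, light_pos):
        h = self.trunk(x)
        sigma = torch.relu(self.sigma(h))
        rgb = self.brdf(torch.cat([h, view_dir, light_pos], dim=-1))
        vis = self.shadow(torch.cat([h, light_pos], dim=-1))
        return sigma, rgb * vis          # shadowed color

model = RelightableNeRF()
x = torch.rand(1024, 3); d = torch.rand(1024, 3); l = torch.rand(1024, 3)
sigma, color = model(x, d, l)
print(sigma.shape, color.shape)
```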
In summary, our contributions include:
• a novel dataset made out of sets of OLAT images of
real-world objects, with accurate camera and light pose
annotations;
• a study comparing different approaches to enable
NeRF to perform relighting alongside novel view syn-
thesis;
• a new architecture, where the stage responsible for ra-
diance estimation is split into two separate networks,
that can render novel views under novel unobserved
lighting conditions;
• a public benchmark for novel view synthesis and re-
lighting of real world objects, that will be maintained
on an online evaluation server.
|
Tang_A_New_Benchmark_On_the_Utility_of_Synthetic_Data_With_CVPR_2023 | Abstract
Deep learning in computer vision has achieved great
success with the price of large-scale labeled training data.
However, exhaustive data annotation is impracticable for
each task of all domains of interest, due to high labor costs
and unguaranteed labeling accuracy. Besides, the uncon-
trollable data collection process produces non-IID training
and test data, where undesired duplication may exist. All
these nuisances may hinder the verification of typical theo-
ries and exposure to new findings. To circumvent them, an
alternative is to generate synthetic data via 3D rendering
with domain randomization. We in this work push forward
along this line by doing profound and extensive research on
bare supervised learning and downstream domain adapta-
tion. Specifically, under the well-controlled, IID data set-
ting enabled by 3D rendering, we systematically verify the
typical, important learning insights, e.g., shortcut learning,
and discover the new laws of various data regimes and net-
work architectures in generalization. We further investigate
the effect of image formation factors on generalization, e.g.,
object scale, material texture, illumination, camera view-
point, and background in a 3D scene. Moreover, we use
the simulation-to-reality adaptation as a downstream task
for comparing the transferability between synthetic and real
data when used for pre-training, which demonstrates that
synthetic data pre-training is also promising to improve real
test results. Lastly, to promote future research, we develop
a new large-scale synthetic-to-real benchmark for image
classification, termed S2RDA, which provides more signifi-
cant challenges for transfer from simulation to reality.
| 1. Introduction
Recently, we have witnessed considerable advances in
various computer vision applications [16,29,38]. However,
such a success is vulnerable and expensive in that it has been
limited to supervised learning methods with abundant la-
labeled data. Some publicly available datasets exist certainly,
which include a great mass of real-world images and ac-
quire their labels via crowdsourcing generally. For example,
ImageNet-1K [9] is of 1.28M images; MetaShift [26] has
2.56M natural images. Nevertheless, data collection and
annotation for all tasks of domains of interest are imprac-
tical since many of them require exhaustive manual efforts
and valuable domain expertise, e.g., self-driving and medi-
cal diagnosis. What’s worse, the label given by humans has
no guarantee to be correct, resulting in unpredictable label
noise. Besides, the poor-controlled data collection process
produces a lot of nuisances, e.g., training and test data aren’t
independent identically distributed (IID) and even have du-
plicate images. All of these shortcomings could prevent the
validation of typical insights and exposure to new findings.
To remedy them, one can resort to synthetic data gen-
eration via 3D rendering [10], where an arbitrary number
of images can be produced with diverse values of imaging
factors randomly chosen in a reasonable range, i.e., domain
randomization [48]; such a dataset creation pipeline is thus
very lucrative, where data with labels come for free. For
image classification, Peng et al. [32] propose the first large-
scale synthetic-to-real benchmark for visual domain adap-
tation [30], VisDA-2017; it includes 152K synthetic images
and55K natural ones. Ros et al. [37] produce 9K synthetic
cityscape images for cross-domain semantic segmentation.
Hinterstoisser et al. [19] densely render a set of 64retail ob-
jects for retail detection. All these datasets are customized
for specific tasks in cross-domain transfer. In this work, we
push forward along this line extensively and profoundly.
The deep models tend to find simple, unintended solu-
tions and learn shortcut features less related to the seman-
tics of particular object classes, due to systematic biases, as
revealed in [14]. For example, a model basing its prediction
on context would misclassify an airplane floating on water
as a boat. The seminal work [14] emphasizes that short-
cut opportunities are present in most data and rarely dis-
appear by simply adding more data. Modifying the training
data to block specific shortcuts may be a promising solution,
e.g., making image variation factors consistently distributed
across all categories. To empirically verify the insight, we
propose to compare the traditional fixed-dataset periodic
training strategy with a new strategy of training with undu-
plicated examples generated by 3D rendering, under the
well-controlled, IID data setting. We run experiments on
three representative network architectures of ResNet [18],
ViT [12], and MLP-Mixer [49], which consistently show
obvious advantages of the data-unrepeatable training (cf.
Sec. 4.1). This also naturally validates the typical argu-
ments of probably approximately correct (PAC) generaliza-
tion [41] and variance-bias trade-off [11]. Thanks to the
ideal IID data condition enabled by the well-controlled 3D
rendering, we can also discover more reliable laws of var-
ious data regimes and network architectures in generaliza-
tion. Some interesting observations are as follows.
•Do not learn shortcuts! The test results on synthetic
data without background are good enough to show that
the synthetically trained models do not learn shortcut
solutions relying on context clues [14].
•A zero-sum game. For the data-unrepeatable train-
ing, IID and OOD (Out-of-Distribution [34, 55]) gen-
eralizations are some type of zero-sum game w.r.t. the
strength of data augmentation.
•Data augmentations do not help ViT much! In IID
tests, ViT performs surprisingly poorly whatever the
data augmentation is and even the triple number of
training epochs does not improve much.
•There is always a bottleneck from synthetic data to
OOD/real data . Here, increasing data size and model
capacity brings no more benefits, and domain adapta-
tion [56] to bridge the distribution gap is indispensable
except for evolving the image generation pipeline to
synthesize more realistic images.
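A schematic contrast between the two regimes discussed above, fixed-dataset periodic training versus data-unrepeatable training on a stream of freshly rendered images; render_batch is a stand-in for an actual 3D rendering pipeline with domain randomization.

```python
import random

def render_batch(batch_size, seed):
    """Placeholder for the 3D renderer with domain randomization:
    every call returns freshly synthesized (image, label) pairs."""
    rng = random.Random(seed)
    return [(f"img_{seed}_{i}", rng.randrange(10)) for i in range(batch_size)]

def train_fixed_dataset(dataset, steps, batch_size, update):
    """Classic periodic training: the same samples repeat across epochs."""
    for _ in range(steps):
        update(random.sample(dataset, batch_size))

def train_unrepeatable(steps, batch_size, update):
    """Data-unrepeatable training: every batch is newly rendered,
    so no training image is ever seen twice."""
    for step in range(steps):
        update(render_batch(batch_size, seed=step))

update = lambda batch: None                      # stand-in for one SGD step
fixed_set = render_batch(1024, seed=-1)          # a frozen dataset for the baseline
train_fixed_dataset(fixed_set, steps=10, batch_size=32, update=update)
train_unrepeatable(steps=10, batch_size=32, update=update)
```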
Furthermore, we comprehensively assess image varia-
tion factors, e.g., object scale, material texture, illumina-
tion, camera viewpoint, and background in a 3D scene. We
then find that to generalize well, deep neural networks must
learn to ignore non-semantic variability , which may appear
in the test. To this end, sufficient images with different val-
ues of one imaging factor should be generated to learn a
robust, unbiased model, proving the necessity of sample di-
versity for generalization [42, 48, 55]. We also observe that
different factors and even their different values have uneven
importance to IID generalization , implying that the under-
explored weighted rendering [3] is worth studying.
Bare supervised learning on synthetic data results in poor
performance in OOD/real tests, and pre-training and then
domain adaptation can improve. Domain adaptation (DA)
[56] is a hot research area, which aims to make predictions
for unlabeled instances in the target domain by transferringknowledge from the labeled source domain. To our knowl-
edge, there is little research on pre-training for DA [24]
(with real data). We thus use the popular simulation-to-real
classification adaptation [32] as a downstream task, study
the transferability of synthetic data pre-trained models by
comparing with those pre-trained on real data like ImageNet
and MetaShift. We report results for several representative
DA methods [7,13,40,45,47] on the commonly used back-
bone, and our experiments yield some surprising findings.
•The importance of pre-training for DA. DA fails
without pre-training (cf. Sec. 4.3.1).
•Effects of different pre-training schemes. Different
DA methods exhibit different relative advantages un-
der different pre-training data. The reliability of exist-
ing DA method evaluation criteria is unguaranteed.
•Synthetic data pre-training vs. real data pre-
training. Synthetic data pre-training is better than pre-
training on real data in our study.
•Implications for pre-training data setting. Big Syn-
thesis Small Real is worth researching. Pre-train with
target classes first under limited computing resources.
•The improved generalization of DA models. Real
data pre-training with extra non-target classes, fine-
grained target subclasses, or our synthesized data
added for target classes helps DA.
Last but not least, we introduce a new, large-scale
synthetic-to-real benchmark for classification adaptation
(S2RDA), which has two challenging tasks S2RDA-49 and
S2RDA-MS-39. S2RDA contains more categories, more re-
alistically synthesized source domain data coming for free,
and more complicated target domain data collected from di-
verse real-world sources, setting a more practical and chal-
lenging benchmark for future DA research.
|
Tu_A_Bag-of-Prototypes_Representation_for_Dataset-Level_Applications_CVPR_2023 | Abstract
This work investigates dataset vectorization for two
dataset-level tasks: assessing training set suitability and
test set difficulty. The former measures how suitable a train-
ing set is for a target domain, while the latter studies how
challenging a test set is for a learned model. Central to
the two tasks is measuring the underlying relationship be-
tween datasets. This needs a desirable dataset vectoriza-
tion scheme, which should preserve as much discriminative
dataset information as possible so that the distance between
the resulting dataset vectors can reflect dataset-to-dataset
similarity. To this end, we propose a bag-of-prototypes
(BoP) dataset representation that extends the image-level
bag consisting of patch descriptors to dataset-level bag con-
sisting of semantic prototypes. Specifically, we develop a
codebook consisting of Kprototypes clustered from a ref-
erence dataset. Given a dataset to be encoded, we quan-
tize each of its image features to a certain prototype in the
codebook and obtain a K-dimensional histogram. Without
assuming access to dataset labels, the BoP representation
provides rich characterization of the dataset semantic dis-
tribution. Furthermore, BoP representations cooperate well
with Jensen-Shannon divergence for measuring dataset-to-
dataset similarity. Although very simple, BoP consistently
shows its advantage over existing representations on a se-
ries of benchmarks for two dataset-level tasks.
| 1. Introduction
Datasets are fundamental in machine learning research,
forming the basis of model training and testing [18, 51, 52,
61]. While large-scale datasets bring opportunities in al-
gorithm design, there lack proper tools to analyze and make
the best use of them [6,51,56]. Therefore, as opposed to tra-
ditional algorithm-centric research where improving mod-
els is of primary interest, the community has seen a grow-
ing interest in understanding and analyzing the data used for
developing models [51, 56]. Recent examples of such goal
include data synthesis [29], data sculpting [25,51], and data
valuation [6,32,56]. These tasks typically focus on individ-
ual samples of a dataset. In this work, we aim to understand
the nature of datasets from a dataset-level perspective.
This work considers two dataset-level tasks: suitability
in training and difficulty in testing. First , training set suit-
ability denotes whether a training set is suitable for training
models for a target dataset. In real-world applications, we
are often provided with multiple training sets from various
data distributions ( e.g., universities and hospitals). Due to
distribution shift, their trained models have different perfor-
mance on the target dataset. Then, it is of high practical
value to select the most suitable training set for the target
dataset. Second , test set difficulty means how challenging a
test set is for a learned model. In practice, test sets are usu-
ally unlabeled and often come from different distributions
than that of the training set. Measuring the test set difficulty
for a learned model helps us understand the model reliabil-
ity, thereby ensuring safe model deployment.
The core of the two dataset-level tasks is to measure the
relationship between datasets. For example, a training set
is more suitable for learning a model if it is more similar
to the target dataset. To this end, we propose a vectoriza-
tion scheme to represent a dataset. Then, the relationship
between a pair of datasets can be simply reflected by the
distance between their representations. Yet, it is challeng-
ing to encode a dataset as a representative vector, because
(i)a dataset has a different cardinality (number of images)
and(ii)each image has its own semantic content ( e.g., cate-
gory). It is thus critical to find an effective way to aggregate
all image features to uncover dataset semantic distributions.
In the literature, some researchers use the first few mo-
ments of distributions such as feature mean and co-variance
to represent datasets [20, 62, 74, 75, 82]. While being com-
putational friendly, these methods do not offer sufficiently
strong descriptive ability of a dataset, such as class distri-
butions, and thus have limited effectiveness in assessing at-
tributes related to semantics. There are also some methods
learn task-specific dataset representations [1, 63]. For ex-
ample, given a dataset with labels and a task loss function,
Task2Vec [1] computes an embedding based on estimates
of the Fisher information matrix associated with a probe
network’s parameters. While these task-specific represen-
tations are able to predict task similarities, they are not suit-
able for characterizing dataset properties of interest. They
require training a network on the specific task [1] or on mul-
tiple datasets [63], so they are not effective in assessing the
training set suitability. Additionally, they require image la-
bels for the specific task, so they cannot be used to measure
the difficulty of unlabeled test sets.
In this work, we propose a simple and effective bag-of-
prototypes (BoP) dataset representation. Its computation
starts with partitioning the image feature space into seman-
tic regions through clustering, where the region centers, or
prototypes, form a codebook. Given a new dataset, we
quantize its features to their corresponding prototypes and
compute an assignment histogram, which, after normaliza-
tion, gives the BoP representation. The dimensionality of
BoP equals the codebook size, which is usually a few hun-
dred and is considered memory-efficient. Meanwhile, the
histogram computed on the prototypes is descriptive of the
dataset semantic distribution.
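A compact sketch of this pipeline, with scikit-learn k-means standing in for codebook construction; the image feature extractor is assumed to be given, and the codebook size is an arbitrary choice. Note that scipy's jensenshannon returns the distance (the square root of the divergence).

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import jensenshannon

def build_codebook(reference_features, k=64):
    """Cluster reference-dataset features into K prototypes."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(reference_features)

def bop_representation(features, codebook):
    """Quantize each image feature to its nearest prototype and return the
    normalized K-bin assignment histogram (the BoP vector)."""
    assignments = codebook.predict(features)
    hist = np.bincount(assignments, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()

# Toy example with random stand-ins for image embeddings.
ref = np.random.randn(2000, 32)
codebook = build_codebook(ref, k=16)
bop_a = bop_representation(np.random.randn(500, 32), codebook)
bop_b = bop_representation(np.random.randn(500, 32) + 0.5, codebook)
print("JS divergence:", jensenshannon(bop_a, bop_b) ** 2)  # square the distance
```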
Apart from being low dimensional and semantically rich,
BoP has a few other advantages. First , while recent works
in task-specific dataset representation usually require full
image annotations and additional learning procedure [1,63],
the computation of BoP does not rely on any. It is relatively
efficient and allows for unsupervised assessment of dataset
attributes. Second , BoP supports dataset-to-dataset similar-
ity measurement through Jensen-Shannon divergence. We
show in our experiment that this similarity is superior to
commonly used metrics such as Fréchet distance [27] and
maximum mean discrepancy [33] in two dataset-level tasks.
|
Tumanyan_Plug-and-Play_Diffusion_Features_for_Text-Driven_Image-to-Image_Translation_CVPR_2023 | Abstract
Large-scale text-to-image generative models have been
a revolutionary breakthrough in the evolution of genera-
tive AI, synthesizing diverse images with highly complex
visual concepts. However, a pivotal challenge in leverag-
ing such models for real-world content creation is provid-
ing users with control over the generated content. In this
paper, we present a new framework that takes text-to- im-
age synthesis to the realm of image-to-image translation –
given a guidance image and a target text prompt as input,
our method harnesses the power of a pre-trained text-to-
image diffusion model to generate a new image that com-
plies with the target text, while preserving the semantic lay-
out of the guidance image. Specifically, we observe and
empirically demonstrate that fine-grained control over the
generated structure can be achieved by manipulating spa-
tial features and their self-attention inside the model. This
results in a simple and effective approach, where features
extracted from the guidance image are directly injected into
the generation process of the translated image, requiring no
training or fine-tuning. We demonstrate high-quality results
on versatile text-guided image translation tasks, including
translating sketches, rough drawings and animations into
realistic images, changing the class and appearance of ob-
jects in a given image, and modifying global qualities such
as lighting and color.
| 1. Introduction
With the rise of text-to-image foundation models –
billion-parameter models trained on a massive amount of
text-image data, it seems that we can translate our imagina-
tion into high-quality images through text [13, 35,37,41].
While such foundation models unlock a new world of cre-
ative processes in content creation, their power and expres-
sivity come at the expense of user controllability, which is
largely restricted to guiding the generation solely through
an input text. In this paper, we focus on attaining con-
trol over the generated structure and semantic layout of the
scene – an imperative component in various real-world con-
tent creation tasks, ranging from visual branding and mar-
keting to digital art. That is, our goal is to take text-to-image
generation to the realm of text-guided Image-to-Image (I2I)
translation, where an input image guides the layout (e.g., the
structure of the horse in Fig. 1), and the text guides the per-
ceived semantics and appearance of the scene (e.g., “robot
horse” in Fig. 1).
A possible approach for achieving control of the gen-
erated layout is to design text-to-image foundation models
that explicitly incorporate additional guiding signals, such
as user-provided masks [13, 29,35]. For example, recently
Make-A-Scene [13] trained a text-to-image model that is
also conditioned on a label segmentation mask, defining the
layout and the categories of objects in the scene. However,
such an approach requires an extensive compute as well as
large-scale text-guidance-image training tuples, and can be
applied at test-time to these specific types of inputs. In this
paper, we are interested in a unified framework that can be
applied to versatile I2I translation tasks, where the struc-
ture guidance signal ranges from artistic drawings to photo-
realistic images (see Fig. 1). Our method does not require
any training or fine-tuning, but rather leverages a pre-trained
andfixed text-to-image diffusion model [37].
We pose the fundamental question of how structure in-
formation is internally encoded in such a model. We dive
into the intermediate spatial features that are formed during
the generation process, empirically analyze them, and de-
vise a new framework that enables fine-grained control over
the generated structure by applying simple manipulations to
spatial features inside the model. Specifically, spatial fea-
tures and their self-attentions are extracted from the guid-
ance image, and are directly injected into the text-guided
generation process of the target image. We demonstrate that
our approach is not only applicable in cases where the guid-
ance image is generated from text, but also for real-world
images that are inverted into the model.
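A schematic of the capture-and-inject mechanism using PyTorch forward hooks on a generic module; the actual method operates on specific spatial-feature and self-attention layers of a pre-trained text-to-image diffusion UNet, which is not reproduced here, so the toy model and layer names below are purely illustrative.

```python
import torch
import torch.nn as nn

class FeatureInjector:
    """Capture intermediate features from a 'guidance' pass and overwrite the
    same layers' outputs during the 'translation' pass."""

    def __init__(self, module: nn.Module, layer_names):
        self.layers = dict(module.named_modules())
        self.layer_names, self.cache, self.inject = layer_names, {}, False
        for name in layer_names:
            self.layers[name].register_forward_hook(self._hook(name))

    def _hook(self, name):
        def hook(_module, _inp, out):
            if self.inject:
                return self.cache[name]      # replace output with guidance features
            self.cache[name] = out.detach()  # record features from the guidance pass
        return hook

    def run(self, model, guidance_x, target_x):
        self.inject = False
        model(guidance_x)                    # pass 1: store guidance features
        self.inject = True
        return model(target_x)               # pass 2: generate with injected features

toy = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 8))
inj = FeatureInjector(toy, layer_names=["0"])
out = inj.run(toy, torch.randn(4, 8), torch.randn(4, 8))
print(out.shape)
```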
To summarize, we make the following key contributions:
(i) We provide new empirical insights about internal spatial
features formed during the diffusion process.
(ii) We introduce an effective framework that leverages the
power of pre-trained and fixed guided diffusion, allowing
to perform high-quality text-guided I2I translation without
any training or fine-tuning.
(iii) We show, both quantitatively and qualitatively that
our method outperforms existing state-of-the-art baselines,
achieving significantly better balance between preserving
the guidance layout and deviating from its appearance.
|
Tokmakov_Breaking_the_Object_in_Video_Object_Segmentation_CVPR_2023 | Abstract
The appearance of an object can be fleeting when it
transforms. As eggs are broken or paper is torn, their color,
shape and texture can change dramatically, preserving vir-
tually nothing of the original except for the identity itself.
Yet, this important phenomenon is largely absent from ex-
isting video object segmentation (VOS) benchmarks. In this
work, we close the gap by collecting a new dataset for
Video Object Segmentation under Transformations (VOST).
It consists of more than 700 high-resolution videos, cap-
tured in diverse environments, which are 20 seconds long on
average and densely labeled with instance masks. We adopt
a careful, multi-step approach to ensure that these videos
focus on complex object transformations, capturing their
full temporal extent. We then extensively evaluate state-
of-the-art VOS methods and make a number of important
discoveries. In particular, we show that existing methods
struggle when applied to this novel task and that their main
limitation lies in over-reliance on static appearance cues.
This motivates us to propose a few modifications for the top-
performing baseline that improve its capabilities by better
modeling spatio-temporal information. More broadly, our
work highlights the need for further research on learning
more robust video object representations.
Nothing is lost or created, all things are merely transformed.
Antoine Lavoisier
| 1. Introduction
Spatio-temporal cues are central in segmenting and
tracking objects in humans, with static appearance play-
ing only a supporting role [23, 27, 43]. In the most ex-
treme scenarios, we can even localize and track objects de-
fined by coherent motion alone, with no unique appearance
whatsoever [20]. Among other benefits, this appearance-
last approach increases robustness to sensory noise and
enables object permanence reasoning [41]. By contrast,
modern computer vision models for video object segmenta-
tion [3, 11, 44, 64] operate in an appearance-first paradigm.
Figure 1. Video frames from the DA VIS’17 dataset [42] (above),
and our proposed VOST (below). While existing VOS datasets
feature many challenges, such as deformations and pose change,
the overall appearance of objects varies little. Our work focuses on
object transformations, where appearance is no longer a reliable
cue and more advanced spatio-temporal modeling is required.
Indeed, the most successful approaches effectively store
patches with associated instance labels and retrieve the clos-
est patches to segment the target frame [11, 38, 44, 64].
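To make the store-and-retrieve description concrete, a toy nearest-neighbour label-propagation step is sketched below; real memory-based VOS models use learned key/value embeddings and attention rather than raw feature matching.

```python
import torch

def propagate_mask(memory_feats, memory_labels, query_feats):
    """memory_feats:  (N, C) features of stored patches/pixels.
    memory_labels: (N,)   instance id of each stored feature.
    query_feats:   (M, C) features of the current frame.

    Each query pixel copies the label of its most similar memory feature,
    which is exactly the appearance-first behaviour discussed above."""
    mem = torch.nn.functional.normalize(memory_feats, dim=1)
    qry = torch.nn.functional.normalize(query_feats, dim=1)
    sim = qry @ mem.t()                   # (M, N) cosine similarities
    nearest = sim.argmax(dim=1)           # index of the best-matching memory entry
    return memory_labels[nearest]         # (M,) predicted instance ids

mem_f = torch.randn(500, 64); mem_l = torch.randint(0, 3, (500,))
print(propagate_mask(mem_f, mem_l, torch.randn(1000, 64)).shape)
```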
What are the reasons for this stark disparity? While some
are algorithmic (e.g., object recognition models being first
developed for static images), a key reason lies in the datasets
we use. See for instance the “Breakdance” sequence from
the validation set of DA VIS’17 [42] in Figure 1: while
the dancer’s body experiences significant deformations and
pose changes, the overall appearance of the person remains
constant, making it an extremely strong cue.
However, this example – representative of many VOS
datasets – covers only a narrow slice of the life of an object.
In addition to translations, rotations, and minor deforma-
tions, objects can transform. Bananas can be peeled, paper
can be cut, clay can be molded into bricks, etc. These trans-
formations can dramatically change the color, texture, and
shape of an object, preserving virtually nothing of the orig-
Figure 2. Representative samples from VOST with annotations at three different time steps (see video for full results). Colours indicate
instance ids, with grey representing ignored regions. VOST captures a wide variety of transformations in diverse environments and provides
pixel-perfect labels even for the most challenging sequences.
inal except for the identity itself (see Figure 1, bottom and
Figure 2). As we show in this paper, tracking object identity
through these changes is relatively easy for humans (e.g. la-
belers), but very challenging for VOS models. In this work,
we set out to fill this gap and study the problem of segment-
ing objects as they undergo complex transformations.
We begin by collecting a dataset that focuses on these
scenarios in Section 3. We capitalize on the recent large-
scale, ego-centric video collections [13, 21], which contain
thousands of examples of human-object interactions with
activity labels. We carefully filter these clips to only in-
clude major object transformations using a combination of
linguistic cues (change of state verbs [19, 29]) and man-
ual inspection. The resulting dataset, which we call VOST
(Video Object Segmentation under Transformations), con-
tains 713 clips, covering 51 transformations over 155 object
categories with an average video length of 21.2 seconds.
We then densely label these videos with more than 175,000
masks, using an unambiguous principle inspired by spatio-
temporal continuity: if a region is marked as an object in
the first frame of a video, all the parts that originate from it
maintain the same identity (see Figure 2).
Equipped with this unique dataset, we analyze state-
of-the-art VOS algorithms in Section 4. We strive to in-
clude a representative set of baselines that illustrates the
majority of the types of approaches to the problem in the
literature, including classical, first frame matching meth-
ods [61], local mask-propagation objectives [26], alter-
native, object-level architectures [3], and the mainstream
memory-based models [11,63–65]. Firstly, we observe that
existing methods are indeed ill-equipped for segmenting ob-
jects through complex transformations, as illustrated by the
large (2.3-12.5 times) gap in performance between VOST
and DA VIS’17 (see Table 2). A closer analysis of the results
reveals the following discoveries: (1) performance of the
methods is inversely proportional to their reliance on static
appearance cues; (2) progress on VOST can be achieved byimproving the spatio-temporal modeling capacity of exist-
ing architectures; (3) the problem is not easily solvable by
training existing methods on more data.
We conclude in Section 5 by summarizing the main chal-
lenges associated with modeling object transformations.
We hope that this work will motivate further exploration
into more robust video object representations. Our dataset,
source code, and models are available at vostdataset.org.
|
Sun_MISC210K_A_Large-Scale_Dataset_for_Multi-Instance_Semantic_Correspondence_CVPR_2023 | Abstract
Semantic correspondence has built up a new way for
object recognition. However current single-object match-
ing schema can be hard for discovering commonalities for
a category and far from the real-world recognition tasks. To
fill this gap, we design the multi-instance semantic corre-
spondence task which aims at constructing the correspon-
dence between multiple objects in an image pair. To sup-
port this task, we build a multi-instance semantic corre-
spondence (MISC) dataset from COCO Detection 2017 task
called MISC210K. We construct our dataset in three steps:
(1) category selection and data cleaning; (2) keypoint de-
sign based on 3D models and object description rules; (3)
human-machine collaborative annotation. Following these
steps, we select 34 classes of objects with 4,812 challenging
images annotated via a well designed semi-automatic work-
flow, and finally acquire 218,179 image pairs with instance
masks and instance-level keypoint pairs annotated. We de-
sign a dual-path collaborative learning pipeline to train
instance-level co-segmentation task and fine-grained level
correspondence task together. Benchmark evaluation and
further ablation results with detailed analysis are provided
with three future directions proposed. Our project is avail-
able on https://github.com/YXSUNMADMAX/MISC210K.
| 1. Introduction
Building dense visual correspondences is a sub-task of
image matching, which aims at finding semantic associ-
ations of salient parts and feature points of objects or
scenes [4,6,34,49]. This task has established a new way for
understanding commonalities among objects in a more fine-
grained manner and has been widely used for various com-
puter vision tasks, including few shot learning [11, 21, 48],
Figure 1. An instructive overview for our multi-instance semantic
correspondence task. In this task, models are required to determine
the corresponding relationships among multiple instances, where
instance masks are introduced for grouping matching key-points.
multi-object tracking [24], and image editing [10, 15, 32].
To learn general semantic correspondence, several popu-
lar datasets, such as Caltech-101 [14], FG3DCar [37], PF-
WILLOW [8], PF-PASCAL [8], and SPair-71k [27], have
been proposed by researchers to train machine learning
models. These datasets were designed to capture large intra-
class variations in color, scale, orientation, illumination and
non-rigid deformation. However, although these datasets
provide rich annotations, they are still far from real-world
applications because each object category is only allowed
to have at most one instance in each image. Moreover, for
most object recognition tasks and applications, multiple ob-
jects of the same category often appear at the same time.
Existing datasets only focus on one-to-one matching with-
out considering multi-instance scenes, and thus cannot be
used as simulations of real-world applications.
In this paper, we aim to reduce the gap between one-
to-one matching and many-to-many matching by building a
new multi-instance semantic correspondence dataset. Fol-
lowing PF-PASCAL [8] and SPair-71k [27], we label key-
points on objects to construct the dataset. There are sev-
eral key challenges during data labeling. First, how to
choose the collection of raw images that contain multiple
objects in natural scenes? Second, how to choose candi-
date object categories that have rich keypoints with dis-
criminative semantics? Third, how to ensure label quality
while maintaining high annotation efficiency? Last, how
to design an evaluation protocol for multi-instance seman-
tic correspondence? While addressing the above issues,
we construct a new multi-instance semantic correspondence
dataset, called MISC210K, which collects 34 different ob-
ject classes from COCO 2017 detection challenge [20] and
contains 218,179 image pairs with large variations in view-
point, scale, occlusion, and truncation. Compared with pop-
ular PF-PASCAL [8] and SPair-71k [27], MISC210K has
much more annotated instances, covers a boarder range of
object categories and presents a large number of many-to-
many matching cases. Besides, we also design a new pro-
tocol to evaluate many-to-many semantic matching algo-
rithms. All these characteristics make MISC210K appeal-
ing to the relevant research community.
We summarize main characteristics of MISC210K as
follows. First, MISC210K provides annotations for many-
to-many matching. Unlike previous datasets [8, 27] which
only exploit one-to-one matching, we find out all semantic
correspondences among multiple objects (up to 4) across
image pairs as shown in Figure 1. Second, MISC210K has
more complicated annotations. The number of keypoints in
SPair-71k varies from 3 to 30 across categories. In con-
trast, we carefully design more keypoints to highlight object
contours, skeletal joints, and other distinctive feature points
that can characterize objects in detail. Third, MISC210K
has a larger scale in comparison to existing datasets. It
contains 218,179 image pairs across 34 object categories,
which is three times larger than the previous largest dataset,
SPair-71k. Fourth, intra-class variations in MISC210K are
more challenging. In addition to variations considered in
SPair-71k [26], we also introduce more challenging varia-
tions, such as mutual occlusion of multiple objects and per-
spective distortions in complex scenes.
To investigate whether MISC210K can help learn gen-
eral correspondences across multiple object instances, we
evaluate previous state-of-the-art methods, MMNet [49],
and CATs [4], on MISC210K. We also propose a dual-path
multi-task learning pipeline to solve the complicated multi-
instance semantic correspondence problem. For both tasks
of correspondence and instance co-segmentation, we de-
sign multi-instance PCK (mPCK) and instance-level mIoU
metrics based on prior works [8,25,49]. According to the results, we identify
new challenges in this task: (1) extracting discriminative
features plays a precursory role to find out commonalities
across multiple objects; (2) the uncertainty in the number
of matching keypoints makes the matching process more
difficult; (3) multiple object instances bring occlusion, in-
terlacing, and other challenging issues. These observations
indicate that multi-instance semantic correspondence is a
challenging problem deserving further investigation.
This paper is organized as follows. We first describe
the MISC210K dataset, its collection process, and statistics.
Then we introduce a generic framework for multi-instance
semantic correspondence, which enables neural networks
to associate salient feature points of object instances across
different images. The proposed dual-path collaborative
learning (DPCL) pipeline outperforms the transfer of pre-
vious one-to-one semantic correspondence algorithms. We
further analyze the characteristics of MISC210K and dis-
cuss key issues in multi-instance semantic correspondence.
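The mPCK and instance-level mIoU metrics mentioned above are only named in this introduction; as a rough illustration of what a multi-instance PCK could look like, the sketch below counts a predicted keypoint as correct when it lies within a threshold alpha times the size of its target instance's bounding box, averaging over all annotated instances in an image pair. The function name, the threshold rule, and the averaging scheme are our assumptions, not the MISC210K definition.

```python
# Hypothetical sketch of a multi-instance PCK (mPCK) metric.
# Assumptions (not from the paper): correctness threshold is
# alpha * max(bbox_h, bbox_w) of the target instance, and the
# score is averaged over all annotated instance matches.
import numpy as np

def mpck(pred_kpts, gt_kpts, gt_boxes, alpha=0.1):
    """pred_kpts, gt_kpts: lists over instances of (K, 2) arrays;
    gt_boxes: list of (x1, y1, x2, y2) for each target instance."""
    correct, total = 0, 0
    for pred, gt, box in zip(pred_kpts, gt_kpts, gt_boxes):
        scale = max(box[2] - box[0], box[3] - box[1])  # instance size
        dist = np.linalg.norm(pred - gt, axis=1)       # per-keypoint error
        correct += int((dist <= alpha * scale).sum())
        total += len(gt)
    return correct / max(total, 1)
```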
|
Sun_Co-Speech_Gesture_Synthesis_by_Reinforcement_Learning_With_Contrastive_Pre-Trained_Rewards_CVPR_2023 | Abstract
There is a growing demand for automatically synthesizing
co-speech gestures for virtual characters. However, it re-
mains a challenge due to the complex relationship between
input speeches and target gestures. Most existing works fo-
cus on predicting the next gesture that fits the data best;
however, such methods are myopic and lack the ability to
plan for future gestures. In this paper, we propose a novel
reinforcement learning (RL) framework called RACER to
generate sequences of gestures that maximize overall
satisfaction. RACER employs a vector quantized varia-
tional autoencoder to learn compact representations of ges-
tures and a GPT-based policy architecture to generate co-
herent sequences of gestures autoregressively. In particular,
we propose a contrastive pre-training approach to calculate
the rewards, which integrates contextual information into
action evaluation and successfully captures the complex re-
lationships between multi-modal speech-gesture data. Ex-
perimental results show that our method significantly out-
performs existing baselines in terms of both objective met-
rics and subjective human judgements. Demos can be found
athttps://github.com/RLracer/RACER.git .
| 1. Introduction
Gesturing is important for human speakers to improve
their expressiveness. It conveys the necessary non-verbal
information to the audience, giving the speech a more emo-
tional touch so as to enhance persuasiveness and credibil-
ity. Similarly, in the virtual world, high-quality 3D gesture
animations can make a talking character more vivid. For ex-
ample, in human-computer interaction scenarios, vivid ges-
tures performed by virtual characters can help listeners to
concentrate and improve the intimacy between humans and
characters [36]. Attracted by these merits, there has been a
growing demand for automatically synthesizing high-quality
co-speech gestures in computer animation.
*Equal contribution.
†Corresponding author.
However, automatically synthesizing co-speech gestures
remains a challenge due to the complicated relationship be-
tween speech audios and gestures. On one hand, a speaker
may play different gestures when speaking the same words
due to different mental and physical states. On the other
hand, a speaker may also play similar gestures when speak-
ing different words. Therefore, co-speech gesture synthe-
sizing is inherently a “many-to-many” problem [ 21]. More-
over, in order to ensure the overall fluency and consistency,
we must take into consideration both the contextual infor-
mation and the subsequent effect upon playing a gesture
[23,38]. Therefore, gesture synthesizing is rather a sequen-
tial decision making problem than a simple matching be-
tween speeches and gestures.
Compared with traditional rule-based approaches [ 35],
data-driven gesture synthesis approaches [ 10,39] has shown
many advantages, including the low development cost and
the ability to generalize. Most existing data-driven ap-
proaches consider gesture synthesis as a classification task
[6,26] or a regression task [ 17] in a deterministic way i.e.
the same speech or text input always maps to the same ges-
ture output. However, these models rely on the assumption
that there exists a unique ground-truth label for each in-
put sequence of speech, which contradicts to the “many-to-
many” nature of the problem. As a consequence, they sac-
rifice diversity and semantics of gestures and tend to learn
some averaged gestures for any input speeches. Some other
works adopt adversarial learning framework, where a dis-
criminator is trained to distinguish between generated ges-
tures and the recorded gestures in dataset [ 10,38]. Although
they improve the generalizability of the gesture generator to
some extent, they still fail to explore the essential relation-
ship between speeches and gestures.
In order to address the above challenges, we propose
a novel Reinforcement leArning framework with Contra-
sitive prE-trained Rewards (RACER) for generating high-
quality co-speech gestures. RACER is trained in an of-
fline manner but can be used to generate the next gesture
for a speaking character in real time. RACER consists of
the following three components. Firstly, in order to ex-
tract meaningful gestures from the infinite action space,
RACER adopts a vector quantized variational autoencoder
(VQ-V AE) [ 32] to learn compact gesture representations,
which significantly reduces the action space. Secondly,
we construct the Q-value network using a GPT-based [ 28]
model, which has natural advantages in generating coherent
sequences of gestures. Thirdly, inspired by the con-
trastive language-image pre-training (CLIP) [ 27] and recent
advances of contrastive learning [ 30], we propose a con-
trastive speech-gesture pre-training method to compute the
rewards, which guide the RL agent to explore sophisticated
relations between speeches and gestures. Note that these
rewards evaluate the quality of gestures as a sequence. By
contrast, in conventional supervised learning frameworks,
the focus is on predicting only the next gesture, ignoring
the quality of the generated sequence as a whole. To sum up,
the core contributions of this paper are as follows:
(1)We formally model the co-speech gesture synthesis
problem as a Markov decision process and propose
a novel RL based approach called RACER to learn
the optimal gesture synthesis policy. RACER can be
trained in an offline manner and used to synthesize co-
speech gestures in real time.
(2)We introduce VQ-V AE to encode and quantize the
motion segments to a codebook, which significantly
reduces the action space and facilitates the RL phase.
(3)We propose a contrastive speech-gesture pre-training
method to compute the rewards, which guide the RL
agent to discover deeper relations from multi-modal
speech-gesture data.
(4)Extensive experimental results show that RACER
outperforms the existing baselines in terms of both
objective metrics and subjective human judgements.
This demonstrates the superiority and the potential of
RL in co-speech gesture synthesis tasks.
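To make the contrastive pre-trained reward described above more concrete, here is a minimal CLIP-style sketch: a speech encoder and a gesture encoder are trained with a symmetric InfoNCE loss over paired speech-gesture segments, and at RL time the cosine similarity between the embeddings of the input speech and a generated gesture segment serves as the reward. The function names, the temperature, and the use of cosine similarity as the reward are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a CLIP-style contrastive speech-gesture reward
# (illustrative only; encoder architectures and names are assumptions).
import torch
import torch.nn.functional as F

def contrastive_loss(speech_emb, gesture_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired (speech, gesture) segments."""
    s = F.normalize(speech_emb, dim=-1)
    g = F.normalize(gesture_emb, dim=-1)
    logits = s @ g.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(s.size(0), device=s.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

@torch.no_grad()
def reward(speech_emb, gesture_emb):
    """Reward for the RL agent: cosine similarity of the paired embeddings."""
    s = F.normalize(speech_emb, dim=-1)
    g = F.normalize(gesture_emb, dim=-1)
    return (s * g).sum(dim=-1)                # (B,) per-segment rewards
```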
|
Touvron_Co-Training_2L_Submodels_for_Visual_Recognition_CVPR_2023 | Abstract
We introduce submodel co-training, a regularization
method related to co-training, self-distillation and stochas-
tic depth. Given a neural network to be trained, for each
sample we implicitly instantiate two altered networks, “sub-
models”, with stochastic depth: we activate only a subset
of the layers. Each network serves as a soft teacher to the
other, by providing a loss that complements the regular loss
provided by the one-hot label. Our approach, dubbed “co-
sub”, uses a single set of weights, and does not involve a
pre-trained external model or temporal averaging.
Experimentally, we show that submodel co-training is
effective to train backbones for recognition tasks such as
image classification and semantic segmentation. Our ap-
proach is compatible with multiple architectures, including
RegNet, ViT, PiT, XCiT, Swin and ConvNext. Our training
strategy improves their results in comparable settings. For
instance, a ViT-B pretrained with cosub on ImageNet-21k
obtains 87.4% top-1 acc. @448 on ImageNet-val.
| 1. Introduction
Although the fundamental ideas of deep trainable neural
networks have been around for decades, only recently have
barriers been removed to allow breakthroughs in success-
fully training deep neural architectures in practice. Many of
these barriers are related to non-convex optimization in one
way or another, which is central to the success of modern
neural networks. The optimization challenges have been
addressed from multiple angles in the literature. First, mod-
ern architectures are designed to facilitate the optimization
of very deep networks. An exceptionally successful design
principle is using residual connections [ 24,25]. Although
this does not change the expressiveness of the functions that
the network can implement, the improved gradient flow al-
leviates, to some extent, the difficulties of optimizing very
deep networks. Another key element to the optimization is
the importance of data, revealed by the step-change in vi-
sual recognition performance resulting from the ImageNet
dataset [ 11], and the popularization of transfer learning with
pre-training on large datasets [ 39,58].θ<latexit sha1_base64="UakMuYN+OmatTz2zliypy+NMzBA=">AAACSHicdVDLahtBEJxVXo7yku1jCAyRAz4tu4oT+WjIJUcHItugXUTvbMsaNI9lpteOWPZbfHV+JX+Qv8gt5JZZWYE4j4Zhqqu66aKKSklPSfI16t25e+/+g62H/UePnzx9NtjeOfG2dgInwirrzgrwqKTBCUlSeFY5BF0oPC2W7zr99AKdl9Z8pFWFuYZzI+dSAAVqNtjNCqtKv9LhazJaIEE7GwyT+CBNxqMx/xukcbKuIdvU8Ww7epGVVtQaDQkF3k/TpKK8AUdSKGz7We2xArGEc5wGaECjz5u1+5a/CkzJ59aFZ4iv2d83GtC+MxgmNdDC/6l15L+0aU3zw7yRpqoJjbg5NK8VJ8u7KHgpHQpSqwBAOBm8crEAB4JCYLeuFNYuCQrf9vuZwUthtQZTNpmnUktT+3aa5qEToLCwn5okfvumbfaySvNhutd2gf5Kjf8fnIzi9HU8+nAwPNrfRLvFnrOXbJ+lbMyO2Ht2zCZMsBW7Ytfsc/Ql+hZ9j37cjPaizc4uu1W93k8/QrG7</latexit>T<latexit sha1_base64="eKHKbIM1GmwN+GJF4oitKCUY1GQ=">AAACPXicdVBNTxRBEO1B0XVFBTkak44LCafJzIosRxIvHteEBeLOhNT01EJn+2PSXYNuJvMvuMJf8Xf4A7gZr17tXdZERF/SyatXVanXr6iU9JQk36KVBw9XHz3uPOk+XXv2/MX6xssjb2sncCSssu6kAI9KGhyRJIUnlUPQhcLjYvp+3j++QOelNYc0qzDXcGbkRAqgIH3KNNC5AMUPT9d7SbybJoP+gN8naZws0GNLDE83otdZaUWt0ZBQ4P04TSrKG3AkhcK2m9UeKxBTOMNxoAY0+rxZWG75dlBKPrEuPEN8of650YD2fqaLMDm36P/uzcV/9cY1TfbzRpqqJjTi9tCkVpwsn/+fl9KhIDULBISTwSsX5+BAUEjpzpXC2ilB4dtuNzP4WVitwZRN5qnU0tS+Had5qEJ6WNgvTRLvvWubrazSvJdutW0I9Hdq/P/kqB+nb+P+x93ewc4y2g57xd6wHZayATtgH9iQjZhghl2yK3YdfY1uou/Rj9vRlWi5s8nuIPr5C2MJrdw=</latexit>T<latexit sha1_base64="eKHKbIM1GmwN+GJF4oitKCUY1GQ=">AAACPXicdVBNTxRBEO1B0XVFBTkak44LCafJzIosRxIvHteEBeLOhNT01EJn+2PSXYNuJvMvuMJf8Xf4A7gZr17tXdZERF/SyatXVanXr6iU9JQk36KVBw9XHz3uPOk+XXv2/MX6xssjb2sncCSssu6kAI9KGhyRJIUnlUPQhcLjYvp+3j++QOelNYc0qzDXcGbkRAqgIH3KNNC5AMUPT9d7SbybJoP+gN8naZws0GNLDE83otdZaUWt0ZBQ4P04TSrKG3AkhcK2m9UeKxBTOMNxoAY0+rxZWG75dlBKPrEuPEN8of650YD2fqaLMDm36P/uzcV/9cY1TfbzRpqqJjTi9tCkVpwsn/+fl9KhIDULBISTwSsX5+BAUEjpzpXC2ilB4dtuNzP4WVitwZRN5qnU0tS+Had5qEJ6WNgvTRLvvWubrazSvJdutW0I9Hdq/P/kqB+nb+P+x93ewc4y2g57xd6wHZayATtgH9iQjZhghl2yK3YdfY1uou/Rj9vRlWi5s8nuIPr5C2MJrdw=</latexit>target2
target1
1−λλ
submodel2submodel1
θ1
<latexit sha1_base64="vDhJRxCARDrEDJh8J1+qCQooVZs=">AAACSnicdVBdaxNBFJ2NVWv8aKr4JMJgKvRp2d02Jn0r+OJjBdMWsku4OztphszHMnNXDcP+GF/1r/gH/Bu+iS/OphGs6IVhzj3nXu7hlLUUDpPkW9S7tXP7zt3de/37Dx4+2hvsPz53prGMT5mRxl6W4LgUmk9RoOSXteWgSskvytXrTr94z60TRr/Ddc0LBVdaLAQDDNR88DQvjazcWoXP57jkCPO0nQ+GSZwlJ9nkmCbx0SiAUQDjdHIyzmgaJ5sakm2dzfej53llWKO4RibBuVma1Fh4sCiY5G0/bxyvga3gis8C1KC4K/zGf0tfBqaiC2PD00g37J8bHpTrLIZJBbh0f2sd+S9t1uBiUnih6wa5ZteHFo2kaGgXBq2E5QzlOgBgVgSvlC3BAsMQ2Y0rpTErhNK1/X6u+QdmlAJd+dxhpYRuXDtLi9AxkLw0H30Svxq1/iCvFR2mB20X6O/U6P/BeRanR3H29nh4eriNdpc8Iy/IIUnJmJySN+SMTAkjnnwin8mX6Gv0PfoR/bwe7UXbnSfkRvV2fgHPwrJ7</latexit> θ2
<latexit sha1_base64="pBZz6yBzS2GzIG2wxfqacBFCUHs=">AAACSnicdVBdaxNBFJ2NVWv8aKr4JMJgKvRp2d02Jn0r+OJjBdMWskuYnb1phszHMnNXDcP+GF/1r/gH/Bu+iS/OphGs6IVhzj3nXu7hlLUUDpPkW9S7tXP7zt3de/37Dx4+2hvsPz53prEcptxIYy9L5kAKDVMUKOGytsBUKeGiXL3u9Iv3YJ0w+h2uaygUu9JiITjDQM0HT/PSyMqtVfh8jktANs/a+WCYxFlykk2OaRIfjQIYBTBOJyfjjKZxsqkh2dbZfD96nleGNwo0csmcm6VJjYVnFgWX0PbzxkHN+IpdwSxAzRS4wm/8t/RlYCq6MDY8jXTD/rnhmXKdxTCpGC7d31pH/kubNbiYFF7oukHQ/PrQopEUDe3CoJWwwFGuA2DciuCV8iWzjGOI7MaV0pgVstK1/X6u4QM3SjFd+dxhpYRuXDtLi9BxJqE0H30Svxq1/iCvFR2mB20X6O/U6P/BeRanR3H29nh4eriNdpc8Iy/IIUnJmJySN+SMTAknnnwin8mX6Gv0PfoR/bwe7UXbnSfkRvV2fgHRk7J8</latexit>
augmentationlabel “tree”
Figure 1. Co-training of submodels (cosub): for each image, two sub-
models are sampled by randomly dropping layers of the full model. The
training signal for each submodel mixes the cross-entropy loss from the
image label with a self-distillation loss obtained from the other submodel.
However, even when (pre-)trained with millions of im-
ages, recent deep networks with millions if not billions
of parameters, are still heavily overparameterized. Tradi-
tional regularization like weight decay, dropout [ 46], or la-
bel smoothing [ 47] are limited in their ability to address
this issue. Data-augmentation strategies, including those
mixing different images like Mixup [ 61] and CutMix [ 60],
have proven to provide a complementary data-driven form
of regularization. More recently, multiple works propose
to resort to self-supervised pre-training. These approaches
rely on a proxy objective that generally provides more su-
pervision signal than the one available from labels, like in
recent (masked) auto-encoders [ 5,16,22], which were popu-
lar in the early deep learning literature [ 7,19,27]. Similarly,
contrastive approaches [ 23] or self-distillation [ 9] provide a
richer supervision less prone to supervision collapse [ 12].
Overall, self-supervised learning makes it possible to learn
larger models with less data, possibly reducing the need of
a pre-training stage [ 15].
Distillation is a complementary approach to improve op-
timization. Distillation techniques were originally devel-
oped to transfer knowledge from a teacher model to a stu-
dent model [ 4,28], allowing the student to improve over
learning from the data directly. In contrast to traditional
distillation, co-distillation does not require pre-training a
(strong) teacher. Instead, a pool of models supervise each
other. Practically, it faces several limitations, including the
difficulty of jointly training more than two students for com-
plexity reasons, as it involves duplicating the weights.
In this paper, we propose a practical way to enable co-
training for a very large number of students. We consider
a single target model to be trained, and we instantiate two
submodels on-the-fly , simply by layerwise dropout [ 20,31].
This gives us two neural networks through which we can
backpropagate to the shared parameters of the target model.
In addition to the regular training loss, each submodel
serves as a teacher to the other, which provides an addi-
tional supervision signal ensuring the consistency across the
submodels. Our approach is illustrated in Figure 1: the pa-
rameterλcontrols the importance of the co-training loss
compared to the label loss, and our experiments show that
it significantly increases the final model accuracy.
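A minimal sketch of one training step in the spirit of Figure 1 is given below. It assumes a model whose forward pass accepts a per-block keep mask (stochastic depth applied layerwise); two random masks yield two submodels, and each submodel's loss mixes the cross-entropy with the label and a distillation term toward the other submodel's detached prediction, weighted by λ. This interface and the exact loss mixing are our assumptions, not the released DeiT code.

```python
# Hedged sketch of one cosub training step. Assumes `model(x, keep_mask)`
# applies stochastic depth by skipping residual blocks where the mask is False.
import torch
import torch.nn.functional as F

def cosub_step(model, x, y, num_blocks, drop_rate=0.5, lam=0.5):
    keep1 = (torch.rand(num_blocks) > drop_rate)   # submodel 1
    keep2 = (torch.rand(num_blocks) > drop_rate)   # submodel 2
    logits1 = model(x, keep1)
    logits2 = model(x, keep2)

    ce = F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y)
    # Each submodel is a soft teacher for the other (targets are detached).
    kd = (F.cross_entropy(logits1, logits2.detach().softmax(dim=-1)) +
          F.cross_entropy(logits2, logits1.detach().softmax(dim=-1)))
    return (1 - lam) * ce + lam * kd
```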
This co-training across different submodels, which we
refer to as cosub , can be regarded as a massive co-training
between 2^L models that share a common set of parameters,
where L is the number of layers in the target architecture.
The target model can be interpreted as the expectation of all
models. With a layer drop-rate set to 0.5, for instance for
a ViT-H model, all submodels are equiprobable, and then it
amounts to averaging the weights of 2^(2×32) models.
Our contributions can be summarized as follows:
• We introduce a novel training approach for deep neu-
ral networks: we co-train submodels. This signifi-
cantly improves the training of most models, establish-
ing the new state of the art in multiple cases. For in-
stance, after pre-training ViT-B on Imagenet-21k and
fine-tuning it at resolution 448, we obtain 87.4% top-1
accuracy on Imagenet-val.
• We provide an efficient implementation to subsample
models on the fly. It is a simple yet effective variation
of stochastic depth [ 31] to drop residual blocks.
• We provide multiple analyses and ablations. Notice-
ably, we show that our submodels are effective models
by themselves even with significant trimming, similar
to LayerDrop [ 20] in natural language processing.
• We validate our approach on multiple architectures
(like ViT, ResNet, RegNet, PiT, XCiT, Swin, Con-
vNext), both for image classification –trained from
scratch or with transfer–, and semantic segmentation.
• We will share models/code for reproducibility in the
DeiT repository. |
Stergiou_The_Wisdom_of_Crowds_Temporal_Progressive_Attention_for_Early_Action_CVPR_2023 | Abstract
Early action prediction deals with inferring the ongoing
action from partially-observed videos, typically at the out-
set of the video. We propose a bottleneck-based attention
model that captures the evolution of the action, through pro-
gressive sampling over fine-to-coarse scales. Our proposed
Temporal Progressive (TemPr) model is composed of mul-
tiple attention towers, one for each scale. The predicted
action label is based on the collective agreement consider-
ing confidences of these towers. Extensive experiments over
four video datasets showcase state-of-the-art performance
on the task of Early Action Prediction across a range of en-
coder architectures. We demonstrate the effectiveness and
consistency of TemPr through detailed ablations.†
| 1. Introduction
Early action prediction (EAP) is the task of inferring
the action label corresponding to a given video, from only
partially observing the start of that video. Interest in EAP
has increased in recent years due to both the ever-growing
number of videos recorded and the requirement of pro-
cessing them with minimal latency. Motivated by the ad-
vances in action recognition [6, 57], where the entire video
is used to recognize the action label, recent EAP meth-
ods [3,15,34,45,60] distill the knowledge from these recog-
nition models to learn from the observed segments. Despite
promising results, the information that can be extracted
from partial and full videos is inevitably different. We in-
stead focus on modeling the observed partial video better.
Several neurophysiological studies [11, 29] have sug-
gested that humans understand actions in a predictive and
not reactive manner. This has resulted in the direct match-
ing hypothesis [18, 46], where actions are believed to be
perceived through common patterns. Encountering any of
these patterns prompts the expectation of specific action(s),
even before the action is completed. Although the early pre-
*Work carried out while A. Stergiou was at University of Bristol
†Code is available at: https://tinyurl.com/temprog
Figure 1. Early action prediction with TemPr involves the use
of multiple scales for extracting features over partially observed
videos. Encoded spatio-temporal features are attended by distinct
transformer towers ( T) at each scale. We visualize two scales,
where the fine scale Tipredicts ‘hold plate’, and the coarse scale
Ti+1predicts ‘hold sponge’. Informative cues from both scales
are combined for early prediction of the action ‘wash plate’.
diction of actions is an inherent part of human cognition, the
task remains challenging for computational modeling.
Motivated by the direct matching hypothesis, we propose
a Temporally Progressive (TemPr) approach to modeling
partially observed videos. Inspired by multi-scale repre-
sentations in images [7, 69] and video [27, 62], we repre-
sent the observed video by a set of sub-sequences of tem-
porally increasing lengths as in Figure 1, which we refer
to as scales. TemPr uses distinct transformer towers over
each video scale. These utilize a shared latent-bottleneck
for cross-attention [28, 37], followed by a stack of self-
attention blocks to concurrently encode and aggregate the
input. From tower outputs, a shared classifier produces la-
bel predictions for each scale. Labels are aggregated based
on their collective similarity and individual confidences.
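The fine-to-coarse sampling and the confidence-based aggregation described above can be sketched as follows; the scale schedule, the tower architecture, and the exact aggregation rule here are simplifications of ours rather than the released TemPr implementation.

```python
# Hedged sketch: progressive fine-to-coarse temporal scales and a simple
# confidence-weighted aggregation of per-scale predictions.
import torch

def temporal_scales(video, num_scales=4):
    """video: (T, C, H, W). Returns sub-sequences of increasing length,
    all starting at the beginning of the observed clip."""
    T = video.shape[0]
    lengths = [max(1, int(T * (i + 1) / num_scales)) for i in range(num_scales)]
    return [video[:L] for L in lengths]

def aggregate(scale_logits):
    """scale_logits: list of (num_classes,) logits, one per tower.
    Weight each tower by its maximum softmax confidence."""
    probs = torch.stack([l.softmax(dim=-1) for l in scale_logits])  # (S, C)
    conf = probs.max(dim=-1).values                                 # (S,)
    weights = conf / conf.sum()
    return (weights.unsqueeze(-1) * probs).sum(dim=0)               # (C,)
```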
In summary, our contributions are as follows: (i) We
propose a progressive fine-to-coarse temporal sampling ap-
proach for EAP. (ii) We use transformer towers over sam-
pled scales to capture discriminative representations and
adaptively aggregate tower predictions, based on their con-
fidence and collective agreement. (iii) We evaluate the
effectiveness of our approach over four video datasets:
UCF-101 [53], EPIC-KITCHENS [8], NTU-RGB [51] and
Something-Something (sub-21 & v2) [21], consistently out-
performing prior work.
|
Su_Towards_All-in-One_Pre-Training_via_Maximizing_Multi-Modal_Mutual_Information_CVPR_2023 | Abstract
To effectively exploit the potential of large-scale mod-
els, various pre-training strategies supported by massive
data from different sources are proposed, including su-
pervised pre-training, weakly-supervised pre-training, and
self-supervised pre-training. It has been proved that com-
bining multiple pre-training strategies and data from var-
ious modalities/sources can greatly boost the training of
large-scale models. However, current works adopt a multi-
stage pre-training system, where the complex pipeline may
increase the uncertainty and instability of the pre-training.
It is thus desirable that these strategies can be integrated
in a single-stage manner. In this paper, we first pro-
pose a general multi-modal mutual information formula
as a unified optimization target and demonstrate that all
mainstream approaches are special cases of our frame-
work. Under this unified perspective, we propose an all-in-
one single-stage pre-training approach, named Maximizing
Multi-modal Mutual Information Pre-training ( M3I Pre-
training ). Our approach achieves better performance than
previous pre-training methods on various vision bench-
marks, including ImageNet classification, COCO object de-
tection, LVIS long-tailed object detection, and ADE20k se-
mantic segmentation. Notably, we successfully pre-train a
billion-level parameter image backbone and achieve state-
of-the-art performance on various benchmarks under pub-
lic data setting. Code shall be released at https://
github.com/OpenGVLab/M3I-Pretraining .
| 1. Introduction
In recent years, large-scale pre-trained models [5,13,27,
30, 37, 55, 65, 91] have swept a variety of computer vision
∗Equal contribution. †This work was done while Weijie Su and Chenxin
Tao were interns at Shanghai Artificial Intelligence Laboratory. Corre-
sponding author.
[Figure 1 diagram panels: Supervised Pre-training, Weakly-supervised
Pre-training, Self-supervised Pre-training (intra view), Self-supervised
Pre-training (inter view), and M3I Pre-training (ours). In each panel an
input x and a target y are encoded by backbones f_θ and f_φ, a predictor
f_ψ produces a predicted feature ẑ_y from the input feature z_x, and the
loss (softmax cross-entropy or L2-norm) maximizes I(z_x; z_y).]
Figure 1. Comparison between different pre-training paradigms
and M3I Pre-training. Existing pre-training methods are all opti-
mizing the mutual information between the input and target repre-
sentations, which can be integrated by M3I Pre-training. “GAP”
refers to global average pooling.
tasks with their strong performance. To adequately train
large models with billions of parameters, researchers design
various annotation-free self-training tasks and obtain suffi-
ciently large amounts of data from various modalities and
sources. In general, existing large-scale pre-training strate-
gies are mainly divided into three types: supervised learn-
ing [20, 65] on pseudo-labeled data ( e.g., JFT-300M [85]),
weakly supervised learning [37,55] on web-crawled image-
text pairs (e.g., LAION-400M [56]), and self-supervised
learning [5, 13, 27, 30, 91] on unlabeled images. Supported
by massive data, all these strategies have their own advan-
tages and have been proven to be effective for large models
of different tasks. In pursuit of stronger representations of
large models, some recent approaches [47, 78, 81] combine
the advantages of these strategies by directly using differ-
ent proxy tasks at different stages, significantly pushing the
performance boundaries of various vision tasks.
Nevertheless, the pipeline of these multi-stage pre-
training approaches is complex and fragile, which may lead
to uncertainty and catastrophic forgetting issues. Specif-
ically, the final performance is only available after com-
pleting the entire multi-stage pre-training pipeline. Due
to the lack of effective training monitors in the intermedi-
ate stages, it is difficult to locate the problematic training
stage when the final performance is poor. To eliminate this
dilemma, it is urgent to develop a single-stage pre-training
framework that can take advantage of various supervision
signals. It is natural to raise the following question: Is it
possible to design an all-in-one pre-training method to have
all the desired representational properties?
To this end, we first point out that different single-
stage pre-training methods share a unified design princi-
ple through a generic pre-training theoretical framework.
We further extend this framework to a multi-input multi-
target setting so that different pre-training methods can be
integrated systematically. In this way, we propose a novel
single-stage pre-training method, termed M3I Pre-training,
in which all desired representational properties are combined in
a unified framework and trained together in a single stage.
Specifically, we first introduce a generic pre-training the-
oretical framework that can be instantiated to cover existing
mainstream pre-training methods. This framework aims to
maximize the mutual information between input represen-
tation and target representation, which can be further de-
rived into a prediction term with a regularization term. (1)
The prediction term reconstructs training targets from the
network inputs, which is equivalent to existing well-known
pre-training losses by choosing proper forms for the pre-
dicted distribution. (2) The regularization term requires the
distribution of the target to maintain high entropy to prevent
collapse, which is usually implemented implicitly through
negative samples or stop-gradient operation. As shown in
Fig. 1, by adopting different forms of input-target paired
data and their representations, our framework can include
existing pre-training approaches and provide possible direc-
tions to design an all-in-one pre-training method.
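The prediction and regularization terms described above correspond to a standard variational lower bound on mutual information; written out (in our paraphrase, not as a verbatim equation from the paper, with q_ψ a variational predictive distribution):

```latex
% Barber-Agakov-style lower bound on the mutual information between the
% input representation z_x and the target representation z_y.
\begin{aligned}
I(z_x; z_y) &= H(z_y) - H(z_y \mid z_x) \\
            &\ge \underbrace{\mathbb{E}_{p(z_x, z_y)}\!\left[\log q_\psi(z_y \mid z_x)\right]}_{\text{prediction term}}
              + \underbrace{H(z_y)}_{\text{regularization term}} .
\end{aligned}
```

Choosing q_ψ as a softmax over classes or as a Gaussian over features recovers, respectively, cross-entropy-style and L2-style pre-training losses.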
To meet the requirement of large-scale pre-training with
various data sources, we further extend our framework to
the multi-input multi-target setting, with which we show that multi-task pre-training methods are optimizing a lower
bound of the mutual information. In addition, we mix
two masked views from two different images as the in-
put. The representation of one image is used to recon-
struct the same view, while the other image is used to re-
construct a different augmented view. Both representa-
tions will predict their corresponding annotated category or
paired texts. In this way, we propose a novel pre-training
approach, called M3I Pre-training, which can effectively
combine the merits of supervised/weakly-supervised/self-
supervised pre-training and enables large-scale vision foun-
dation models to benefit from multi-modal/source large-
scale data. Our contributions can be summarized as follows:
• We theoretically demonstrate all existing mainstream
pre-training methods share a common optimization ob-
jective, i.e.,maximizing the mutual information between
input and target representation. We also show how to in-
stantiate our framework as distinct pre-training methods.
• We propose a novel single-stage pre-training approach
called M3I Pre-training to gather the benefit of various
pre-training supervision signals, via extending our mu-
tual information pre-training framework to a multi-input
multi-target setting.
• Comprehensive experiments demonstrate the effective-
ness of our approach. We successfully pre-train
InternImage-H [79], a model with billion-level parame-
ters, and set a new record on basic detection and segmen-
tation tasks, i.e.,65.4 box AP on COCO test-dev [46],
62.9 mIoU on ADE20K [98].
|
Tseng_EDGE_Editable_Dance_Generation_From_Music_CVPR_2023 | Abstract
Dance is an important human art form, but creating
new dances can be difficult and time-consuming. In this
work, we introduce Editable Dance GEneration (EDGE), a
state-of-the-art method for editable dance generation that
is capable of creating realistic, physically-plausible dances
while remaining faithful to the input music. EDGE uses a
transformer-based diffusion model paired with Jukebox, a
strong music feature extractor, and confers powerful editing
capabilities well-suited to dance, including joint-wise con-
ditioning, and in-betweening. We introduce a new metric
for physical plausibility, and evaluate dance quality gener-
ated by our method extensively through (1) multiple quanti-
tative metrics on physical plausibility, beat alignment, and
diversity benchmarks, and more importantly, (2) a large-
scale user study, demonstrating a significant improvement
over previous state-of-the-art methods. Qualitative samples
from our model can be found at our website.
| 1. Introduction
Dance is an important part of many cultures around the
world: it is a form of expression, communication, and social
interaction [29]. However, creating new dances or dance an-
imations is uniquely difficult because dance movements are
expressive and freeform, yet precisely structured by music.
In practice, this requires tedious hand animation or motion
capture solutions, which can be expensive and impractical.On the other hand, using computational methods to gener-
ate dances automatically can alleviate the burden of the cre-
ation process, leading to many applications: such methods
can help animators create new dances or provide interac-
tive characters in video games or virtual reality with real-
istic and varied movements based on user-provided music.
In addition, dance generation can provide insights into the
relationship between music and movement, which is an im-
portant area of research in neuroscience [2].
Previous work has made significant progress using ma-
chine learning-based methods, but has achieved limited suc-
cess in generating dances from music that satisfy user con-
straints. Furthermore, the evaluation of generated dances is
subjective and complex, and existing papers often use quan-
titative metrics that we show to be flawed.
In this work, we propose Editable Dance GEneration
(EDGE), a state-of-the-art method for dance generation that
creates realistic, physically-plausible dance motions based
on input music. Our method uses a transformer-based dif-
fusion model paired with Jukebox, a strong music feature
extractor. This unique diffusion-based approach confers
powerful editing capabilities well-suited to dance, includ-
ing joint-wise conditioning and in-betweening. In addition
to the advantages immediately conferred by the modeling
choices, we observe flaws with previous metrics and pro-
pose a new metric that captures the physical accuracy of
ground contact behaviors without explicit physical model-
ing. In summary, our contributions are the following:
1. We introduce EDGE, a diffusion-based approach for
dance generation that combines state-of-the-art perfor-
mance with powerful editing capabilities and is able
to generate arbitrarily long sequences. EDGE im-
proves on previous hand-crafted audio feature extrac-
tion strategies by leveraging music audio representa-
tions from Jukebox [5], a pre-trained generative model
for music that has previously demonstrated strong per-
formance on music-specific prediction tasks [3, 7].
2. We analyze the metrics proposed in previous works
and show that they do not accurately represent human-
evaluated quality as reported by a large user study.
3. We propose a new approach to eliminating foot-sliding
physical implausibilities in generated motions using a
novel Contact Consistency Loss, and introduce Phys-
ical Foot Contact Score, a simple new acceleration-
based quantitative metric for scoring physical plausi-
bility of generated kinematic motions that requires no
explicit physical modeling.
This work is best enjoyed when accompanied by our
demo samples. Please see the samples at our website.
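As a rough illustration of the acceleration-based idea behind the Physical Foot Contact score mentioned in contribution 3, the sketch below penalizes frames in which the body accelerates horizontally while neither foot is approximately static on the ground; the thresholds, the joint inputs, and the scoring rule are our assumptions, not the metric as defined in the paper.

```python
# Hedged sketch of an acceleration-based foot-contact plausibility check.
# Thresholds and the scoring rule are illustrative assumptions.
import numpy as np

def foot_contact_score(root_pos, left_foot, right_foot, fps=30,
                       vel_thresh=0.01):
    """root_pos, left_foot, right_foot: (T, 3) world-space trajectories."""
    dt = 1.0 / fps
    root_acc = np.diff(root_pos, n=2, axis=0) / dt**2            # (T-2, 3)
    lf_vel = np.linalg.norm(np.diff(left_foot, axis=0), axis=1) / dt
    rf_vel = np.linalg.norm(np.diff(right_foot, axis=0), axis=1) / dt
    # A foot counts as "in contact" if it is nearly static.
    contact = (lf_vel[:-1] < vel_thresh) | (rf_vel[:-1] < vel_thresh)
    # Horizontal acceleration should be near zero when no foot is planted.
    horiz_acc = np.linalg.norm(root_acc[:, [0, 2]], axis=1)
    violation = np.where(contact, 0.0, horiz_acc)
    return float(violation.mean())  # lower = more physically plausible
```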
|
Tang_DETR_With_Additional_Global_Aggregation_for_Cross-Domain_Weakly_Supervised_Object_CVPR_2023 | Abstract
This paper presents a DETR-based method for cross-
domain weakly supervised object detection (CDWSOD),
aiming at adapting the detector from source to target do-
main through weak supervision. We think DETR has strong
potential for CDWSOD due to an insight: the encoder and
the decoder in DETR are both based on the attention mech-
anism and are thus capable of aggregating semantics across
the entire image. The aggregation results, i.e., image-
level predictions, can naturally exploit the weak supervi-
sion for domain alignment. Such motivated, we propose
DETR with additional Global Aggregation (DETR-GA), a
CDWSOD detector that simultaneously makes “instance-
level + image-level” predictions and utilizes “strong +
weak“ supervisions. The key point of DETR-GA is very
simple: for the encoder / decoder, we respectively add mul-
tiple class queries / a foreground query to aggregate the
semantics into image-level predictions. Our query-based
aggregation has two advantages. First, in the encoder, the
weakly-supervised class queries are capable of roughly lo-
cating the corresponding positions and excluding the dis-
traction from non-relevant regions. Second, through our
design, the object queries and the foreground query in the
decoder share consensus on the class semantics, therefore
making the strong and weak supervision mutually benefit
each other for domain alignment. Extensive experiments on
four popular cross-domain benchmarks show that DETR-
GA significantly improves cross-domain detection accuracy
(e.g., 29.0% →79.4% mAP on PASCAL VOC →Clipart all
dataset) and advances the states of the art.
| 1. Introduction
The cross-domain problem is a critical challenge for ob-
ject detection in real-world applications. Concretely, there
is usually a domain gap between the training and testing
*Corresponding author: Si Liu
[Figure 1 diagram labels: feature tokens, class queries, object queries,
and a correlated foreground query processed by the transformer encoder and
decoder; attention maps are shown for "person" and "dog".]
Figure 1. To exploit the weak supervision, DETR-GA aggregates
the semantic information across the entire image into image-level
predictions. Specifically, DETR-GA adds multiple class queries /
a foreground query into the transformer encoder / decoder, respec-
tively. The foreground query is correlated with the object queries
but has no position embedding. We visualize the attention score
of some queries, e.g., “person“ and “dog”. Despite no position su-
pervision, each class query / the foreground query attends to the
class-specific / foreground regions for semantic aggregation.
data. This domain gap significantly compromises the detec-
tion accuracy when the detector trained on the source do-
main is directly deployed on a novel target domain. To mit-
igate the domain gap, existing domain adaptation methods
can be categorized into supervised, unsupervised [7,10,39],
and weakly supervised approaches [17, 21, 33, 58]. Among
the three approaches, we are particularly interested in the
weakly supervised one because it requires only image-level
annotations and achieves a good trade-off between the adap-
tation effect and the annotation cost. Therefore, this paper
challenges the cross-domain weakly supervised object de-
tection (CDWSOD), aiming at adapting the detector from
the source to target domain through weak supervision.
We think the DETR-style detector [3,27,67] has high po-
tential for solving CDWSOD. In contrast to current CDW-
SOD methods dominated by pure convolutional neural net-
work detectors (“CNN detectors”), this paper is the first to
explore DETR-style detectors for CDWSOD, to the best of
our knowledge. Our optimism for DETR is NOT due to its
prevalence or competitive results in generic object detec-
tion. In fact, we empirically find the DETR-style detector
barely achieves any superiority against CNN detectors for
direct cross-domain deployment (Section 4.4). Instead, our
motivation is based on the insight, i.e., the DETR-style de-
tector has superiority for combining the strong and weak
supervision, which is critical for CDWSOD [17, 21, 58].
Generally, CDWSOD requires using weak ( i.e., image-
level) supervision on the target domain to transfer the knowl-
edge from the source domain. Therefore, it is essential to sup-
plement the detector with image-level prediction capability.
We argue that this essential requirement can be well accommodated by
two basic components in DETR, i.e., the encoder and the
decoder. Both the encoder and the decoder are based on
the attention mechanism and thus have strong capability to
capture long-range dependencies. This long-range model-
ing capability, by its nature, is favorable for aggregating se-
mantic information to make image-level predictions.
To fully exploit the weak supervision in CDWSOD, this
paper proposes DETR with additional Global Aggregation
(DETR-GA). DETR-GA adds attention-based global aggre-
gation into DETR so as to make image-level predictions,
while simultaneously preserving the original instance-level
predictions. Basically, DETR uses multiple object queries
in the decoder to probe local regions and gives instance-
level predictions. Based on DETR, DETR-GA makes two
simple and important changes: for the encoder / decoder,
it respectively adds multiple class queries / a foreground
query to aggregate semantic information across the entire
image. The details are explained below:
1) The encoder makes image-level prediction through
a novel class query mechanism . Specifically, the encoder
adds multiple class queries into its input layer, with each
query responsible for an individual class. Each class query
probes the entire image to aggregate class-specific informa-
tion and predicts whether the corresponding class exists in
the image. During training, we use the image-level multi-
class label to supervise the class query predictions.
Despite NO position supervision, we show these class
queries are capable of roughly locating the corresponding po-
sitions (Fig. 1) and thus excluding the distraction from non-
relevant regions. Therefore, our class query mechanism
achieves better image-level aggregation effect than the av-
erage pooling strategy that is commonly adopted in pure-
CNN CDWSOD methods. Empirically, we find this simple
component alone brings significant improvement for CDW-
SOD, e.g., +20.8 mAP on PASCAL VOC →Clipart test.
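A small sketch of the class query mechanism described above: learnable per-class query embeddings are concatenated to the image tokens before the transformer encoder, and the encoder outputs at those positions are read out for a multi-label, image-level BCE loss. The module names, dimensions, and the use of a generic PyTorch encoder are assumptions for illustration, not the authors' code.

```python
# Hedged sketch of weakly-supervised class queries in a DETR-style encoder.
import torch
import torch.nn as nn

class ClassQueryHead(nn.Module):
    def __init__(self, num_classes, dim=256, encoder=None):
        super().__init__()
        self.class_queries = nn.Parameter(torch.randn(num_classes, dim))
        self.encoder = encoder or nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True), 6)
        self.cls_head = nn.Linear(dim, 1)

    def forward(self, tokens):                 # tokens: (B, N, dim) image features
        B = tokens.size(0)
        q = self.class_queries.unsqueeze(0).expand(B, -1, -1)
        x = torch.cat([q, tokens], dim=1)      # prepend one query per class
        x = self.encoder(x)
        cls_logits = self.cls_head(x[:, :q.size(1)]).squeeze(-1)  # (B, C)
        return cls_logits

# Training: loss = F.binary_cross_entropy_with_logits(cls_logits, multi_hot_labels)
```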
2) The decoder gives image-level and instance-level pre-
dictions simultaneously through correlated object and foreground
queries. To this end, we simply remove the position
embedding from an object query and use the remaining con-
tent embedding as the foreground query. The insight for
this design is: in an object query, the position embedding en-
courages focus on a local region [27,31,32], while the content
embedding is prone to global responses to all potential fore-
ground regions. Therefore, when we remove the position
embedding, an object query discards the position bias and
becomes a foreground query with global responses (as visu-
alized in Fig. 1). Except this difference ( i.e., with or without
position embedding), the object queries and the foreground
query share all the other elements in the decoder, e.g., the
self-attention layer, cross-attention layer. Such correlation
encourages them to share consensus on the class semantics
and thus benefits domain alignment across all classes.
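A minimal way to realize the foreground query under the assumptions above is to reuse one object query's content embedding while zeroing its position embedding, so it passes through the same decoder layers as the object queries but without the local-position bias; this is a simplified sketch, not the exact implementation.

```python
# Hedged sketch: a foreground query = an object query's content embedding
# with its position embedding removed (zeroed), sharing the same decoder.
import torch

def build_decoder_queries(content_emb, pos_emb):
    """content_emb, pos_emb: (Q, dim) embeddings of the Q object queries."""
    fg_content = content_emb[:1]                            # reuse one content embedding
    fg_pos = torch.zeros(1, pos_emb.size(1))                # no position embedding
    content = torch.cat([content_emb, fg_content], dim=0)   # (Q + 1, dim)
    pos = torch.cat([pos_emb, fg_pos], dim=0)               # (Q + 1, dim)
    return content, pos  # both pass through the shared decoder layers
```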
Overall, DETR-GA utilizes the weak supervision on the
encoder and decoder to transfer the detection capability
from source to target domain. Experimental results show
that DETR-GA improves cross-domain detection accuracy
by a large margin. Our main contributions can be summa-
rized as follows:
•As the first work to explore DETR-style detector for
CDWSOD, this paper reveals that DETR has strong poten-
tial for weakly-supervised domain adaptation because its
attention mechanism can fully exploit image-level supervi-
sion by aggregating semantics across the entire image.
•We propose DETR-GA, a CDWSOD detector that si-
multaneously makes “instance-level + image-level” predic-
tions and can utilize both “strong + weak” supervision. The
key point of DETR-GA is the newly-added class queries /
foreground query in the encoder / decoder, which promotes
global aggregation for image-level prediction.
•Extensive experiments on four popular cross-domain
benchmarks show that DETR-GA significantly improves
CDWSOD accuracy and advances the states of the art. For
example, on PASCAL VOC →Clipart all, DETR-GA im-
proves the baseline from 29.0% to 79.4% .
|
Tejankar_Defending_Against_Patch-Based_Backdoor_Attacks_on_Self-Supervised_Learning_CVPR_2023 | Abstract
Recently, self-supervised learning (SSL) was shown to
be vulnerable to patch-based data poisoning backdoor at-
tacks. It was shown that an adversary can poison a small
part of the unlabeled data so that when a victim trains
an SSL model on it, the final model will have a back-
door that the adversary can exploit. This work aims to
defend self-supervised learning against such attacks. We
use a three-step defense pipeline, where we first train a
model on the poisoned data. In the second step, our pro-
posed defense algorithm (PatchSearch) uses the trained
model to search the training data for poisoned samples
and removes them from the training set. In the third step,
a final model is trained on the cleaned-up training set.
Our results show that PatchSearch is an effective defense.
As an example, it improves a model’s accuracy on im-
ages containing the trigger from 38.2% to 63.7% which is
very close to the clean model’s accuracy, 64.6%. More-
over, we show that PatchSearch outperforms baselines and
state-of-the-art defense approaches including those using
additional clean, trusted data. Our code is available at
https://github.com/UCDvision/PatchSearch
| 1. Introduction
Self-supervised learning (SSL) promises to free deep
learning models from the constraints of human supervision.
It allows models to be trained on cheap and abundantly
available unlabeled, uncurated data [22]. A model trained
this way can then be used as a general feature extractor for
various downstream tasks with a small amount of labeled
data. However, recent works [8, 44] have shown that un-
curated data collection pipelines are vulnerable to data poi-
soning backdoor attacks.
Data poisoning backdoor attacks on self-supervised
learning (SSL) [44] work as follows. An attacker intro-
duces “backdoors” in a model by simply injecting a few
carefully crafted samples called “poisons” in the unlabeled
*Corresponding author <[email protected]>. Work done while interning at Meta AI.
training dataset. Poisoning is done by pasting an attacker-
chosen “trigger” patch on a few images of a “target” cat-
egory. A model pre-trained on such poisoned data be-
haves similarly to a non-poisoned (clean) model in all cases ex-
cept when it is shown an image with a trigger. Then, the
model will cause a downstream classifier trained on top of
it to mis-classify the image as the target category. This types
of attacks are sneaky and hard to detect since the trigger is
like an attacker’s secret key that unlocks a failure mode of
the model. Our goal in this work is to defend SSL models
against such attacks.
While there have been many works for defending su-
pervised learning against backdoor attacks [34], many of
them directly rely upon the availability of labels. There-
fore, such methods are hard to adopt in SSL where there are
no labels. However, a few supervised defenses [5, 28] can
be applied to SSL with some modifications. One such de-
fense, proposed in [28], shows the effectiveness of applying
a strong augmentation like CutMix [55]. Hence, we adopt
i-CutMix [33], CutMix modified for SSL, as a baseline. We
show that i-CutMix is indeed an effective defense that addi-
tionally improves the overall performance of SSL models.
Another defense that does not require labels was pro-
posed in [44]. While this defense successfully mitigates the
attack, its success is dependent on a significant amount of
clean, trusted data. However, access to such data may not
be practical in many cases. Even if available, the trusted
data may have its own set of biases depending on its sources
which might bias the defended model. Hence, our goal is to
design a defense that does not need any trusted data.
In this paper, we propose PatchSearch, which aims at
identifying poisoned samples in the training set without ac-
cess to trusted data or image labels. One way to identify a
poisoned sample is to check if it contains a patch that be-
haves similar to the trigger. A characteristic of a trigger is
that it occupies a relatively small area in an image ( ≈5%)
but heavily influences a model’s output. Hence, we can use
an input explanation method like Grad-CAM [45] to high-
light the trigger. Note that this idea has been explored in
supervised defenses [14, 16]. However, Grad-CAM relies
on a supervised classifier which is unavailable in our case.
[Figure 1 diagram panels: (a) assign clusters with k-means over the training
set X; (b) get a candidate trigger t_i from x_i via Grad-CAM by extracting
the max w×w patch; (c) calculate the poison score of x_i by pasting t_i on a
flip test set and counting new assignments to cluster c_i; (d) iterative
search: randomly sample s images per cluster, score them, prune the fraction
r of clusters with the lowest scores, and rank the remaining images to pick
the top-k; (e) train a poison classifier h on the top-k poisons.]
Figure 1. Illustration of PatchSearch. SSL has been shown to group visually similar images in [4, 47], and [44] showed that poisoned
images are close to each other in the representation space. Hence, the very first step (a) is to cluster the training dataset. We use clustering to
locate the trigger and to efficiently search for poisoned samples. The second step locates the candidate trigger in an image with Grad-CAM
(b) and assigns a poison score to it (c). The third step (d) searches for highly poisonous clusters and only scores images in them. This step
outputs a few highly poisonous images which are used in the fourth and final step (e) to train a more accurate poison classifier.
This is a key problem that we solve with k-means cluster-
ing. An SSL model is first trained on the poisoned data and
its representations are used to cluster the training set. SSL
has been shown to group visually similar images in [4, 47],
and [44] showed that poisoned images are close to each
other in the representation space. Hence, we expect poi-
soned images to be grouped together and the trigger to be
the cause of this grouping. Therefore, cluster centers pro-
duced by k-means can be used as classifier weights in place
of a supervised classifier in Grad-CAM. Finally, once we
have located a salient patch, we can quantify how much it
behaves like a trigger by pasting it on a few random images,
and counting how many of them get assigned to the cluster
of the patch (Figure 1 (c)). A similar idea has been explored
in [14], but unlike ours, it requires access to trusted data, it
is a supervised defense, and it operates during test time.
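The flip-counting poison score described in this paragraph can be sketched as follows: the candidate patch extracted from image x_i is pasted onto a small held-out set of images, and the score is the number of those images whose nearest cluster center becomes the cluster of x_i. The function name, the paste location, and the use of cosine similarity for cluster assignment are assumptions for illustration.

```python
# Hedged sketch of the flip-count poison score (names/details are assumptions).
import torch
import torch.nn.functional as F

@torch.no_grad()
def poison_score(encoder, centers, patch, flip_images, target_cluster, loc=0):
    """centers: (K, D) k-means centers; patch: (C, w, w); flip_images: (N, C, H, W)."""
    pasted = flip_images.clone()
    w = patch.shape[-1]
    pasted[:, :, loc:loc + w, loc:loc + w] = patch        # paste candidate trigger
    feats = F.normalize(encoder(pasted), dim=-1)          # (N, D)
    assign = (feats @ F.normalize(centers, dim=-1).t()).argmax(dim=1)
    return int((assign == target_cluster).sum())          # higher = more poisonous
```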
One can use the above process of assigning poison scores
to rank the entire training set and then treat the top ranked
images as poisonous. However, our experiments show that
in some cases, this poison detection system has very low
precision at high recalls. Further, processing all images is
redundant since only a few samples are actually poisonous.
For instance, there can be as few as 650 poisons in a dataset
of 127K samples ( ≈0.5%) in our experiments. Hence, we
design an iterative search process that focuses on finding highly poisonous clusters and only scores images in them
(Figure 1 (d)). The results of iterative search are a few but
highly poisonous samples. These are then used in the next
step to build a poison classifier which can identify poisons
more accurately. This results in a poison detection system
with about 55% precision and about 98% recall on average.
In summary, we propose PatchSearch, a novel de-
fense algorithm that defends self-supervised learning mod-
els against patch based data poisoning backdoor attacks by
efficiently identifying and filtering out poisoned samples.
We also propose to apply i-CutMix [33] augmentation as a
simple but effective defense. Our results indicate that Patch-
Search is better than both the trusted data based defense
from [44] and i-CutMix. Further, we show that i-CutMix
and PatchSearch are complementary to each other and com-
bining them results in a model with an improved overall
performance while significantly mitigating the attack.
|
Tan_Learning_on_Gradients_Generalized_Artifacts_Representation_for_GAN-Generated_Images_Detection_CVPR_2023 | Abstract
Recently, there has been a significant advancement in
image generation technology, known as GANs, which can easily
generate realistic fake images, leading to an increased risk
of abuse. However, most image detectors suffer from sharp
performance drops in unseen domains. The key to fake im-
age detection is to develop a generalized representation to
describe the artifacts produced by generation models. In
this work, we introduce a novel detection framework, named
Learning on Gradients (LGrad), designed for identifying
GAN-generated images, with the aim of constructing a gen-
eralized detector with cross-model and cross-data. Specif-
ically, a pretrained CNN model is employed as a trans-
formation model to convert images into gradients. Sub-
sequently, we leverage these gradients to present the gen-
eralized artifacts, which are fed into the classifier to as-
certain the authenticity of the images. In our framework,
we turn the data-dependent problem into a transformation-
model-dependent problem. To the best of our knowledge,
this is the first study to utilize gradients as the representa-
tion of artifacts in GAN-generated images. Extensive ex-
periments demonstrate the effectiveness and robustness of
gradients as generalized artifact representations. Our de-
tector achieves a new state-of-the-art performance with a
remarkable gain of 11.4%. The code is released at https:
//github.com/chuangchuangtan/LGrad .
| 1. Introduction
Over the past years, remarkable progress has been made
in deep generative models, i.e., generative adversarial net-
works (GANs) [14], their variations [3, 21–23, 36], and VAEs
[25]. The generated media is highly realistic and indistin-
*Corresponding author
[Figure 1 layout: rows CelebaHQ, ProGAN, StyleGAN, StyleGAN2; columns
(a) Images, (b) Gradients, (c) Grad-R, (d) Grad-G, (e) Grad-B.]
Figure 1. Visualization of gradients of real images and GAN-
generated images extracted from a pre-trained model. To fully
understand the gradients, heatmaps for R, G, and B channels also
are shown, where red is high. In the gradients, the content of im-
ages is filtered out, and only the discriminative pixels are retained
for the pre-trained model’s target task. We utilize the gradients as
the generalized artifacts representation to develop a novel detec-
tion framework.
guishable from real to human eyes. Although it has the po-
tential for many novel applications [8], it also brings new
risks due to the abuse of fake information. The misuse
of DeepFakes has been confirmed, e.g., some people swap
the faces of women onto pornographic videos [8]. In addi-
tion, some individuals, even non-experts, can malevolently
manipulate or create fake images or videos for political or
economic purposes, leading to serious social problems [17].
Thus, it is extremely necessary to develop forgery detection
techniques to help people determine the credibility of the
media [17, 40].
Various detectors [11–13,16,18,19,30,42,45] have been
developed to detect GAN-generated images. Some stud-
ies [16, 30, 45] focus on human face images, while oth-
ers [11,18,19,42] handle various categories of images. They
mainly depend on local region artifacts [4, 44], blending
boundary [26], global textures [30], and frequency-level ar-
tifacts [13, 18, 19, 45]. However, those methods heavily
rely on the training settings, resulting in failure detection
of images from unseen categories or GAN models. The
test images in the actual scene are always from unknown
sources [17], rendering it challenging to develop gener-
alized detectors. There are some works [19, 42] exploit-
ing pre-processing, data augmentation, and reducing the ef-
fects of frequency-level artifacts to develop a robust detec-
tor. Nevertheless, there still needs to be a more generalized
representation of the clue produced by generation models,
which is critical for robust fake image detection.
To tackle this problem, we propose a novel and simple
detection framework, referred to as Learning on Gradients
(LGrad). A new generalized feature, Gradients, is devel-
oped to serve as a representation of the artifacts produced
by GAN models. We believe that the gradients of a trained
CNN model can highlight the important pixels in the tar-
get task, thereby serving as a valuable cue for detecting
fake images. As shown in Figure 1, we adopt a pre-trained
discriminator of ProGAN [21] to extract gradients of im-
ages produced by Celeba-HQ [21], ProGAN [21], Style-
GAN [22], StyleGAN2 [23]. In these gradients, the content
of images is filtered out, and only the discriminative pixels
that are relevant to the pre-trained model’s target task are re-
tained. Therefore, the gradients depend more on the pre-trained model than on the training sources, thereby enhancing the detector's performance on unseen data. In
our framework, a pretrained model, called transformation
model , is employed to convert images to gradients. These
gradients serve as the generalized artifacts and are fed into
the classifier to obtain a robust detector. Since the transfor-
mation model is indeterminate in our framework, targeted
anti-detection cannot be effectively launched.
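To make this pipeline concrete, a rough Python sketch is given below: a frozen transformation model turns each image into a gradient map, and a separate classifier is trained on those maps. The tiny stand-in networks, the summed scalar objective, and the min-max normalization are illustrative assumptions, not the paper's exact transformation model (a pretrained ProGAN discriminator) or preprocessing.

import torch
import torch.nn as nn

def image_to_gradients(images, transformation_model):
    # Gradient of the frozen model's (summed) scalar output w.r.t. the input
    # pixels; this suppresses most image content and keeps the pixels that
    # matter for the transformation model's original task.
    images = images.clone().requires_grad_(True)
    score = transformation_model(images).sum()
    grad, = torch.autograd.grad(score, images)
    flat = grad.flatten(1)
    lo = flat.min(dim=1, keepdim=True)[0]
    hi = flat.max(dim=1, keepdim=True)[0]
    grad = ((flat - lo) / (hi - lo + 1e-8)).view_as(images)
    return grad.detach()

# Stand-in networks for illustration only.
transformation_model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)
classifier = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2),
)

images = torch.rand(4, 3, 256, 256)
gradients = image_to_gradients(images, transformation_model)
logits = classifier(gradients)  # real vs. fake prediction on the gradients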
To validate the performance of our LGrad, we use only images generated by ProGAN to train the detector and evaluate it on various sources, including cross-category, cross-model, and cross-model & category settings. Numerous experiments demonstrate the effectiveness and robustness of gradients as generalized artifacts. Our detector achieves a new state-
of-the-art performance in known and unseen settings.
Our paper makes the following contributions:
• We develop a new detection framework, Learning on Gradients (LGrad), to detect GAN-generated images. Our detector achieves a new state-of-the-art performance.
• We introduce a new generalized artifact representation, Gradients, for GAN-generated image detection. Furthermore, we are the first to use gradients as the representation of artifacts.
• Our framework turns the data-driven problem into a
transformation-model-driven problem. The robustness
of the detector is improved with the introduction of the
transformation model.
• We prove the great potential of the discriminator of
GAN in detecting GAN-generated images.
|
Tang_HumanBench_Towards_General_Human-Centric_Perception_With_Projector_Assisted_Pretraining_CVPR_2023 | Abstract
Human-centric perceptions include a variety of vision
tasks, which have widespread industrial applications, in-
cluding surveillance, autonomous driving, and the meta-
verse. It is desirable to have a general pretrained model for versatile human-centric downstream tasks. This pa-
per forges ahead along this path from the aspects of both
benchmark and pretraining methods. Specifically, we propose HumanBench, built on existing datasets, to comprehensively evaluate, on common ground, the generalization abilities of different pretraining methods on 19
datasets from 6 diverse downstream tasks, including person
ReID, pose estimation, human parsing, pedestrian attribute
recognition, pedestrian detection, and crowd counting. To
learn both coarse-grained and fine-grained knowledge in
human bodies, we further propose a Projector AssisTed
Hierarchical pretraining method ( PATH ) to learn diverse
knowledge at different granularity levels. Comprehensive
evaluations on HumanBench show that our PATH achieves
new state-of-the-art results on 17 downstream datasets and
on-par results on the other 2 datasets. The code will be made publicly available at https://github.com/OpenGVLab/HumanBench.
| 1. Introduction
Human-centric perception has been a long-standing pur-
suit for computer vision and machine learning communi-
ties. It encompasses massive research tasks and applica-
tions including person ReID in surveillance [16, 17, 60,
96, 112], human parsing and pose estimation in the meta-
verse [47, 48, 61, 79, 90, 92], and pedestrian detection in au-
tonomous driving [8, 51, 87]. Although significant progress
has been made, most existing human-centric studies and
pipelines are task-specific for better performances, leading
parameter-tuning, and annotations. To promote real-world
deployment, we ask: can a general human-centric pretraining model be developed that benefits diverse human-centric tasks and can be efficiently adapted to downstream tasks?
Intuitively, we argue that pretraining such general
human-centric models is possible for two reasons. First,
there are obvious correlations among different human-
centric tasks. For example, both human parsing and pose es-
timation predict the fine-grained parts of human bodies [29,
49] with differences in annotation granularities. Thus, the
annotations in one human-centric task may benefit other
human-centric tasks when trained together. Second, recent
achievements in foundation models [5, 11, 38, 65, 66, 80]
have shown that large-scale deep neural networks ( e.g.,
transformers [13]) have the flexibility to handle diverse in-
put modalities and the capacity to deal with different tasks.
For example, Uni-Percevier [115] and BEITv3 [85] are ap-
plicable to multiple vision and language tasks.
Despite the opportunities of processing multiple human-
centric tasks with one pretraining model, there are two ob-
stacles for developing general human-centric pretraining
models. First, although there are many benchmarks for ev-
ery single human-centric task, there is still no benchmark
to fairly and comprehensively compare various pretraining
methods on a common ground for a broad range of human-
centric tasks, data distributions, and application scenarios.
Second, different from most existing general foundation
models trained by unified global vision-language consis-
tencies, pretraining human-centric models are required to
learn both global ( e.g., person ReID and pedestrian detec-
tion) and fine-grained semantic features ( e.g., pose estima-
tion and human parsing) of human bodies from diverse an-
notation granularity simultaneously.
In this paper, we first build a benchmark, called Hu-
manBench , based on existing datasets to enable pretrain-
[Figure 1 panels: (a) Diversity of Images — scene images and person-centric images; day/night, indoor/outdoor, crowd; (b) Comprehensiveness of Evaluation — detection, parsing, pose, attribute, ReID, counting; (c) High performance by our pretraining method, compared with MAE, CLIP, and SoTA]
Figure 1. (a-b) Overview of our proposed HumanBench. HumanBench includes diverse images, including scene images and person-centric
images. Our HumanBench also has comprehensive evaluation. Specifically, it evaluates pretraining models on 6 tasks, including pedestrian
detection, human parsing, pose estimation, pedestrian attribute recognition, person ReID, and crowd counting. (c) High performances are
achieved by our pretraining method on HumanBench. We report 1-heavy occluded MR−2and 1-EPE for Caltech and H3.6pose.
ing and evaluating human-centric representations that can
be generalized to various downstream tasks. HumanBench
has two appealing properties. (1) Diversity. The images in our HumanBench have diverse properties, ranging from person-centric cropped images to scene images with crowds of pedestrians, from indoor to outdoor scenes (Fig. 1(a)), and from surveillance to the metaverse. (2) Comprehensiveness. HumanBench covers comprehen-
sive image-based human-centric tasks in both pretraining
datasets and downstream tasks (Fig. 1(b)). For pretrain-
ing, we include 11 million images from 37 datasets across
five representative human-centric tasks, i.e.,person ReID,
pose estimation, human parsing, pedestrian attribute recog-
nition, and pedestrian detection. For evaluation, Human-
Bench evaluates the generalization abilities on 12 pretrain-
ing datasets, 6 unseen datasets of pretraining tasks, and 2
datasets out of pretraining tasks, ranging from global pre-
diction, i.e., ReID, to local prediction, i.e., human pars-
ing and pose estimation. Results on our HumanBench
(Fig. 1(c)) lead to two interesting findings. First, compared
with datasets with natural images for general pretrained
models, HumanBench is more effective for human-centric
perception tasks. Second, as human-centric pretraining requires learning features of diverse granularity, supervised pretraining methods with proper designs can learn from the diverse annotations in HumanBench and perform better than existing unsupervised pretraining methods; details are shown in Sec. 5.3.
Based on HumanBench, we further investigate how to
learn a better human-centric supervised pretraining model
from diverse datasets with various annotations. How-
ever, naive multitask pretraining may easily suffer from task conflicts [53, 97] or overfitting to the pretraining annotations [67, 107], losing the desirable generalization ability of pretraining. Inspired by [86], which suggests that adding an MLP projector before the task head can significantly enhance the generalization ability of supervised pretraining, we propose Projector AssisTed Hierarchical pretraining (PATH), a projector-assisted pretraining method with hi-
erarchical weight sharing to tackle the task conflicts of su-
pervised pretraining from diverse annotations. Specifically,
the weights of backbones are shared among all datasets, and
the weights of projectors are shared only for datasets of the
same tasks, while the weights of the heads are shared only
for a single dataset – forming a hierarchical weight-sharing
structure. During the pretraining stage, we insert the task-
specific projectors before dataset heads but discard them
when evaluating models on downstream tasks. With the hierarchical weight-sharing strategy, our pretraining method forces the backbone to learn a shared knowledge pool, the projectors to attend to task-specific knowledge, and the heads to focus on the specific annotations and data distribution of each dataset.
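A minimal PyTorch sketch of this hierarchical weight-sharing structure is given below; the dataset names, feature dimensions, and the two-layer MLP projector are placeholder assumptions rather than the actual PATH configuration.

import torch
import torch.nn as nn

class HierarchicalWeightSharing(nn.Module):
    # One backbone shared by all datasets, one projector shared per task,
    # and one head per dataset. Projectors are used during pretraining and
    # can be discarded when transferring the backbone downstream.
    def __init__(self, backbone, task_of_dataset, feat_dim, out_dims):
        super().__init__()
        self.backbone = backbone
        self.task_of_dataset = task_of_dataset
        self.projectors = nn.ModuleDict({
            task: nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.GELU(),
                                nn.Linear(feat_dim, feat_dim))
            for task in set(task_of_dataset.values())
        })
        self.heads = nn.ModuleDict({
            name: nn.Linear(feat_dim, out_dims[name]) for name in task_of_dataset
        })

    def forward(self, x, dataset):
        feat = self.backbone(x)                                      # shared
        feat = self.projectors[self.task_of_dataset[dataset]](feat)  # per task
        return self.heads[dataset](feat)                             # per dataset

backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))
model = HierarchicalWeightSharing(
    backbone,
    task_of_dataset={"reid_dataset_a": "reid", "pose_dataset_b": "pose"},
    feat_dim=256,
    out_dims={"reid_dataset_a": 751, "pose_dataset_b": 17},
)
out = model(torch.rand(2, 3, 32, 32), dataset="pose_dataset_b")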
In summary, our contributions are twofold: (1) we
build HumanBench, a large-scale dataset for human-centric
pretraining including diverse images and comprehensive
evaluations. (2) To tackle the diversity of input images
and annotations of various human-centric datasets, we pro-
pose PATH, a projector-assisted hierarchical weight-sharing
method for pretraining the general human-centric represen-
tations. We achieve state-of-the-art results by PATH on
15 datasets throughout 6 downstream tasks (Fig. 1(c)), on-
par results on 2 datasets, and slightly lower results on 2
datasets on HumanBench when using ViT-Base. Experi-
ments with ViT-Large backbone show that our method can
further achieve considerable gains over ViT-Base, achiev-
ing another 2 new state-of-the-art results and showing the
promising scalability of our method. We hope our work can
shed light on future research on pretraining human-centric
representations, such as unified structures.
Table 1. Statistics of Pretraining Datasets
Task Number of datasets Number of Images
Person ReID 7 5,446,419
Pose estimation 11 3,739,291
Human parsing 7 1,419,910
Pedestrian Attribute 6 242,880
Pedestrian Detection 6 170,687
In total 37 11,019,187
|
Song_Efficient_Hierarchical_Entropy_Model_for_Learned_Point_Cloud_Compression_CVPR_2023 | Abstract
Learning an accurate entropy model is a fundamental
way to remove the redundancy in point cloud compression.
Recently, the octree-based auto-regressive entropy model, which adopts the self-attention mechanism to explore dependencies in a large-scale context, has proven promis-
ing. However, heavy global attention computations and
auto-regressive contexts are inefficient for practical appli-
cations. To improve the efficiency of the attention model,
we propose a hierarchical attention structure whose complexity is linear in the context scale and which maintains a global
receptive field. Furthermore, we present a grouped context
structure to address the serial decoding issue caused by the
auto-regression while preserving the compression perfor-
mance. Experiments demonstrate that the proposed entropy
model achieves superior rate-distortion performance and
significant decoding latency reduction compared with the
state-of-the-art large-scale auto-regressive entropy model.
| 1. Introduction
Point cloud is a fundamental data structure to represent
3D scenes. It has been widely applied in 3D vision systems
such as autonomous driving and immersive applications.
The large-scale point cloud typically contains millions of
points [36]. It is challenging to store and transmit such
massive data. Hence, efficient point cloud compression that
reduces memory footprints and transmission bandwidth is
necessary to develop practical point cloud applications.
Recently, deep learning methods have promoted the de-
velopment of point cloud compression [4, 9, 10, 16, 31, 36,
40,43]. It is a common pipeline to learn an octree-based en-
tropy model to estimate octree node symbol ( i.e., occupancy
symbol) distributions. The point cloud is first organized as
an octree, and occupancy symbols are then encoded into
the bitstream losslessly by an entropy coder ( e.g., arithmetic
[Figure 1 plot: x-axis — Decoding Latency (s), log scale; y-axis — Bitrate Reduction over G-PCC (%); methods — EHEM, Light EHEM, OctAttention, SparsePCGC, OctSqueeze, G-PCC]
Figure 1. Bitrate and decoding speed in the log scale for lossless
compression on 16-bit quantized SemanticKITTI. The proposed
method EHEM yields state-of-the-art performance with a compa-
rable decoding latency to the efficient traditional method G-PCC.
coder [47]). An accurate entropy model is required since it
reduces the cross entropy between the estimated distribution
and the ground truth, which corresponds to the actual bitrate.
Various attempts have been made to improve the accuracy
by designing different context structures [4,10,16,37]. The
key to these advances is to increase the context capacity and
introduce references from high-resolution octree represen-
tations. For example, the context in OctAttention [10] in-
cludes hundreds of previously decoded siblings ( i.e., nodes
at the same octree level). The large-scale context incorpo-
rates more references for the entropy coder, and the high-
resolution context preserves detailed features of the point
cloud. Both of them contribute to building informative con-
texts and effective entropy models.
However, large-scale context requires heavy computa-
tions to model dependencies among numerous references.
The previous work [10] uses the global self-attention mech-
anism to model long-range dependencies within the con-
text, and its complexity is quadratic in the length of the
large-scale context. Furthermore, considering the efficiency
issue, it is infeasible to build a deeper entropy model or
extend the context scale to enhance the modeling capabil-
ity based on global attention. Another concern is the serial
decoding process caused by the inclusion of previously de-
coded siblings. This auto-regressive context structure incurs
a practically unacceptable decoding latency ( e.g., around
700 seconds per frame).
To address these issues, we build an entropy model
with an efficient hierarchical attention model and a parallel-
friendly grouped context structure. The hierarchical atten-
tion model partitions the context into local windows and
computes attention within these windows independently.
Therefore, the complexity is linear in the context scale, which allows further extension of the network depth and
context capacity to improve performance. Since the recep-
tive field of the localized attention is limited, we adopt a
multi-scale network structure to query features across dif-
ferent windows. The context is progressively downsam-
pled by merging neighboring nodes to generate a new to-
ken. Then, cross-window dependencies can be captured by
incorporating these new tokens in the same window. The
grouped context divides the occupancy symbol sequence
into two groups. Each group is conditioned on ancestral
features and previously decoded groups, and hence nodes
in the same group can be coded in parallel. Furthermore, in
contrast to the previous auto-regressive context that only ex-
ploits causal parts of the ancestral context [10], the grouped context allows making use of the complete ancestral context.
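A rough sketch of window-restricted attention with neighbor merging is shown below; the window size, the pairwise merge rule, and the single attention layer are assumptions for illustration, not the exact EHEM architecture.

import torch
import torch.nn as nn

class WindowedAttentionLevel(nn.Module):
    # Attention is computed inside fixed-size local windows (cost linear in
    # the context length); neighboring tokens are then merged into a coarser
    # sequence so the next level can mix information across windows.
    def __init__(self, dim, window_size, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.merge = nn.Linear(2 * dim, dim)

    def forward(self, tokens):
        # tokens: (B, N, C) context features, N divisible by window_size
        B, N, C = tokens.shape
        windows = tokens.reshape(B * N // self.window_size, self.window_size, C)
        windows, _ = self.attn(windows, windows, windows)
        tokens = windows.reshape(B, N, C)
        coarse = self.merge(tokens.reshape(B, N // 2, 2 * C))  # 2x downsample
        return tokens, coarse

level = WindowedAttentionLevel(dim=64, window_size=16)
context = torch.rand(2, 256, 64)   # 256 context tokens
fine, coarse = level(context)      # coarse: (2, 128, 64), fed to the next level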
The proposed efficient hierarchical entropy model called
EHEM is evaluated on SemanticKITTI [3] and Ford [32]
benchmarks. It surpasses state-of-the-art methods in terms
of both compression performance and efficiency. Contribu-
tions of this work can be summarized from the following
perspectives:
• We propose a hierarchical attention model, which yields improved compression performance by extending the model and context while maintaining efficiency.
• We design a grouped context structure that enables parallel decoding (see the sketch after this list), making entropy models that rely on high-resolution references practical for real applications.
• The proposed method achieves state-of-the-art RD per-
formance with a practically applicable coding speed.
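Following up on the second contribution above, the sketch below illustrates the two-group context; the grouping rule, layer sizes, and symbol alphabet are illustrative assumptions, not the exact design.

import torch
import torch.nn as nn

class GroupedContextHeads(nn.Module):
    # Group-0 occupancy symbols are predicted from ancestral features alone;
    # group-1 symbols additionally condition on the decoded group-0 symbols,
    # so all symbols within a group can be decoded in parallel.
    def __init__(self, dim, num_symbols=256):
        super().__init__()
        self.embed = nn.Embedding(num_symbols, dim)
        self.head0 = nn.Linear(dim, num_symbols)
        self.head1 = nn.Linear(2 * dim, num_symbols)

    def forward(self, ancestral_feat, group0_symbols):
        # ancestral_feat: (B, N, dim) features from coarser octree levels
        # group0_symbols: (B, N) decoded occupancy symbols of group 0
        logits0 = self.head0(ancestral_feat)
        g0 = self.embed(group0_symbols)
        logits1 = self.head1(torch.cat([ancestral_feat, g0], dim=-1))
        return logits0, logits1

heads = GroupedContextHeads(dim=128)
ancestors = torch.rand(1, 1024, 128)
group0 = torch.randint(0, 256, (1, 1024))
logits_g0, logits_g1 = heads(ancestors, group0)  # each group decoded in parallel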
|
Tu_Learning_With_Noisy_Labels_via_Self-Supervised_Adversarial_Noisy_Masking_CVPR_2023 | Abstract
Collecting large-scale datasets is crucial for training deep models; annotating the data, however, inevitably yields noisy labels, which poses challenges to deep learning algo-
rithms. Previous efforts tend to mitigate this problem via
identifying and removing noisy samples or correcting their
labels according to the statistical properties (e.g., loss val-
ues) among training samples. In this paper, we aim to tackle
this problem from a new perspective, delving into the deep
feature maps, we empirically find that models trained with
clean and mislabeled samples manifest distinguishable acti-
vation feature distributions. From this observation, a novel
robust training approach termed adversarial noisy mask-
ing is proposed. The idea is to regularize deep features
with a label quality guided masking scheme, which adap-
tively modulates the input data and label simultaneously,
preventing the model to overfit noisy samples. Further, an
auxiliary task is designed to reconstruct input data, it nat-
urally provides noise-free self-supervised signals to rein-
force the generalization ability of models. The proposed
method is simple yet effective, it is tested on synthetic and
real-world noisy datasets, where significant improvements
are obtained over previous methods. Code is available at
https://github.com/yuanpengtu/SANM .
| 1. Introduction
Deep learning has achieved remarkable success, relying
on large-scale datasets with human-annotated accurate la-
bels. However, collecting such high-quality labels is time-
consuming and expensive. As an alternative, inexpensive
strategies are usually used for generating labels for large-
scale samples, such as web crawling, leveraging search en-
gines, or using machine-generated annotations. All these
[Figure 1 panels: (a) clean-trained, mis-predicted; (b) noisy-trained, mis-predicted; (c) noisy-trained, correctly-predicted]
Figure 1. Activation maps of mis-predicted (a-b) and correctly-
predicted (c) samples when training PreAct ResNet-18 with clean
(i.e., clean-trained) and noisy (i.e., noisy-trained) data on CIFAR-
10 (1st and 2nd row) and Clothing1M [38] (3rd row).
alternative methods inevitably yield numerous noisy sam-
ples. However, previous research [2] has revealed that deep
networks can easily overfit to noisy labels and suffer from
dramatic degradation in the generalization performance.
Towards this problem, numerous Learning with Noisy
labels (LNL) approaches have been proposed. Sample se-
lection methods [3, 8, 34, 41] are the most straightforward
methods, which attempt to select clean samples based on
certain criterion (e.g., loss value) and then reweight or dis-
card the noisy instances to reduce the interference of mis-
labeled training samples. However, these methods fail to
leverage the potential information of the discarded samples.
Similarly, label correction based methods [32, 40, 48] try
to correct labels, which often impose assumptions on the
existence of a small correctly-labeled validation subset or
directly utilize predictions of deep models. However, such
assumptions cannot always be fulfilled in real-world noisy datasets [27], leading to limited application scenarios. Be-
sides, the predictions of deep model tend to fluctuate when
training with mislabeled samples, making the label correc-
tion process unstable and sub-optimal [36]. On the contrary,
regularization based methods [2, 6, 15, 20, 26, 29, 43, 47, 50]
aim to alleviate the side effect of label noise by prevent-
ing the deep models from overfitting to all training sam-
ples, which is flexible and can work collaboratively with
other LNL methods. Both explicit and implicit regularization techniques have been proposed: the former generally calibrates the parameter updates of networks by modifying the expected training loss, e.g., dropout, while the latter focuses on improving generalization by utilizing stochasticity, e.g., data augmentation strategies and stochastic gradient descent [29]. However, most of these methods are designed for general fully-supervised learning tasks with correctly-labeled samples, and they can exhibit poor generalization ability when the label noise is severe [29].
In this paper, we present a novel regularization based
method specialized for the LNL problem. Our method is in-
spired by an observation that some differences exist in
activation maps between the predictions of deep models
trained with clean and noisy labels. As shown in Fig. 1 (a),
activation maps of mis-predicted samples by the model
trained with clean labels are always focused on their fore-
ground areas. By contrast, when the model is trained with
noisy labels, it tends to generates results focused on the
meaningless background area for the mis-predicted sam-
ples (Fig. 1 (b)). And even for cases that the prediction
is correct, i.e., Fig. 1 (c), the high-activated region drifts
to irrelevant (i.e., edge) areas of the object, rather than the
main regions of the object. This indicates that model trained
with mislabeled samples is likely to overfit to some corner
parts of the object from noisy data or remember the less-
informative regions (i.e., background) . Therefore, from this
observation, we propose a novel Self-supervised Adversar-
ial Noisy Masking (SANM) method for LNL. The idea is
to explicitly regularize the model to generate more diverse activation maps by masking certain regions (e.g., the max-activated region) of the input image, thus alleviating confirmation bias [8].
Moreover, to investigate how the masking strategies af-
fect the performance of training deep models with noisy
labels, we conduct a simple experiment, where differ-
ent adversarial masking strategies (i.e., masking the max-
activated region of input images with fixed, random, and
noise-aware mask ratio) were applied to the popular LNL
framework DivideMix [15]. As shown in Fig. 2, we find that the performance gain is highly sensitive to the masking strategy. A fixed or random mask ratio obtains only a marginal gain, whereas treating correctly-labeled and mislabeled samples with different mask ratios yields a significant performance gain. This indicates that designing an effective adversarial masking method for LNL is non-trivial and not yet well addressed.
To address this problem, a label-quality-guided adaptive masking strategy is proposed, which modulates the image label and the image mask ratio simultaneously. The label is updated by first leveraging the soft model prediction and then reducing the probability of the max-activated class by the noise-aware masking ratio while lifting the probabilities of the other classes. Intuitively, a sample with a high probability of be-
to a more strictly regularized input image and modulated
label. Therefore, the negative impact of noisy labels can
be greatly reduced. As for the correctly-labeled samples,
the masking ratio is relatively small, which plays the role of
general regularization strategy and improves the model gen-
eralization ability by preventing the model from overfitting
training samples. Further, we specially customized an aux-
iliary decode branch to reconstruct the original input image,
which provides noise-free self-supervised information for
learning a robust deep model. The proposed SANM is flex-
ible and can further boost existing LNL methods. It is tested
on synthetic and real-world large-scale noisy datasets, and
elaborately ablative experiments were conducted to verify
each design component of the proposed method. In a nut-
shell, the key contributions of this work are:
•We propose a novel self-supervised adversarial noisy
masking method named SANM to explicitly impose reg-
ularization for the LNL problem, preventing the model from
overfitting to some corner parts of the object or less-
informative regions from noisy data;
•A label quality guided masking strategy is proposed
to differently adjust the process for clean and noisy sam-
ples according to the label quality estimation. This strategy
modulates the image label and the ratio of image masking
simultaneously;
•A self-supervised mask reconstruction auxiliary task
is designed to reconstruct the original images based on the
features of masked ones, which aims at enhancing general-
ization by providing noise-free supervision signals.
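To make the label-quality-guided masking described above concrete, the following sketch modulates each input image and its soft label with a noise-aware mask ratio; the square mask, the linear ratio schedule, and the per-sample clean probability (e.g., from a mixture model over losses) are illustrative assumptions rather than the exact SANM procedure.

import torch

def adaptive_noisy_masking(images, soft_labels, clean_prob, activation_maps,
                           max_ratio=0.5):
    # Samples that look mislabeled (low clean_prob) get a larger mask ratio,
    # which (i) masks a larger patch around the max-activated location and
    # (ii) shifts label mass away from the max-activated class to the others.
    B, C, H, W = images.shape
    K = soft_labels.shape[1]
    ratios = max_ratio * (1.0 - clean_prob)
    masked, new_labels = images.clone(), soft_labels.clone()
    for i in range(B):
        idx = activation_maps[i].flatten().argmax().item()
        cy, cx = divmod(idx, W)
        half = int(0.5 * ratios[i].item() * min(H, W))
        masked[i, :, max(0, cy - half):cy + half + 1,
                     max(0, cx - half):cx + half + 1] = 0.0
        k = new_labels[i].argmax()
        delta = ratios[i] * new_labels[i, k]
        new_labels[i, k] -= delta
        new_labels[i] += delta / (K - 1)
        new_labels[i, k] -= delta / (K - 1)
    return masked, new_labels

imgs = torch.rand(4, 3, 32, 32)
labels = torch.softmax(torch.randn(4, 10), dim=1)
clean_prob = torch.tensor([0.9, 0.2, 0.6, 0.95])   # assumed clean-sample scores
acts = torch.rand(4, 32, 32)                        # class activation maps
masked_imgs, modulated_labels = adaptive_noisy_masking(imgs, labels,
                                                       clean_prob, acts)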
|
Su_Language_Adaptive_Weight_Generation_for_Multi-Task_Visual_Grounding_CVPR_2023 | Abstract
Despite the impressive performance in visual grounding, the prevailing approaches usually exploit the visual
backbone in a passive way, i.e., the visual backbone ex-
tracts features with fixed weights without expression-related
hints. This passive perception may lead to mismatches (e.g., redundant or missing features), limiting further performance im-
provement. Ideally, the visual backbone should actively
extract visual features since the expressions already pro-
vide the blueprint of desired visual features. The active
perception can take expressions as priors to extract rel-
evant visual features, which can effectively alleviate the
mismatches. Inspired by this, we propose an active per-
ception Visual Grounding framework based on Language
Adaptive Weights, called VG-LAW. The visual backbone
serves as an expression-specific feature extractor through
dynamic weights generated for various expressions. Ben-
efiting from the specific and relevant visual features ex-
tracted from the language-aware visual backbone, VG-LAW
does not require additional modules for cross-modal inter-
action. Along with a neat multi-task head, VG-LAW can jointly handle referring expression comprehension and segmentation. Extensive experiments on four represen-
tative datasets, i.e., RefCOCO, RefCOCO+, RefCOCOg,
and ReferItGame, validate the effectiveness of the proposed
framework and demonstrate state-of-the-art performance.
| 1. Introduction
Visual grounding (such as referring expression compre-
hension [4, 23, 42, 45, 46, 48, 50], referring expression seg-
mentation [6, 14, 17, 23, 32, 33, 44], and phrase grounding
[4, 23, 50]) aims to detect or segment the specific object
[Figure 1 diagram: the expression "white cake with cherries" is processed by a linguistic backbone; the image I is processed by a visual backbone F(I; W, A), where A denotes the architecture and W the weights; in (a) and (b) features feed a cross-modal interaction module and a task-specific head, while in (c) language-adaptive weights are generated per stage/layer for the visual backbone, followed by a multi-task head]
Figure 1. The comparison of visual grounding frameworks. (a)
The visual and linguistic backbone independently extracts fea-
tures, which are fused through cross-modal interaction. (b) Ad-
ditional designed modules are inserted into the visual backbone to
modulate visual features using linguistic features. (c) VG-LAW
can generate language-adaptive weights for the visual backbone
and directly output referred objects through our designed multi-
task head without additional cross-modal interaction modules.
based on a given natural language description. Compared to
general object detection [38] or instance segmentation [11],
which can only locate objects within a predefined and fixed
category set, visual grounding is more flexible and purpose-
ful. Free-formed language descriptions can specify specific
visual properties of the target object, such as categories, at-
tributes, relationships with other objects, relative/absolute
positions, etc.
Due to the similarity with detection tasks, previous vi-
sual grounding approaches [23, 33, 46, 50] usually follow
the general object detection frameworks [1,11,37], and pay
Figure 2. Attention visualization of the visual backbone with dif-
ferent weights. (a) input image, (b) visual backbone with fixed
weights, (c) and (d) visual backbone with weights generated for
“white bird” and “right bird”, respectively.
attention to the design of cross-modal interaction modules.
Despite achieving impressive performance, the visual back-
bone is not well explored. Concretely, the visual backbone
passively extracts visual features with fixed architecture and
weights, regardless of the referring expressions, as illus-
trated in Fig. 1 (a). Such passive feature extraction may
lead to mismatches between the extracted visual features
and those required for various referring expressions, such
as missing or redundant features. Taking Fig. 2 as an exam-
ple, the fixed visual backbone has an inherent preference for
the image, as shown in Fig. 2 (b), which may be irrelevant
to the referring expression “white bird”. Ideally, the visual
backbone should take full advantage of expressions, as the
expressions can provide information and tendencies about
the desired visual features.
Several methods have noticed this phenomenon and pro-
posed corresponding solutions, such as QRNet [45] and LAVT [44]. Both methods achieve expression-aware visual feature extraction by inserting carefully designed interaction modules (such as QD-ATT [45] and PWAM [44]) into the visual backbone, as illustrated in Fig. 1 (b). Concretely, visual features are first extracted and then adjusted using QD-ATT (channel and spatial attention) or PWAM (transformer-based pixel-word attention) at the end of each stage in QRNet and LAVT, respectively. Although the adjusted visual features improve performance, the extract-then-adjust paradigm inevitably contains a large number of feature-extraction components with fixed weights, e.g., the components belonging to the original visual backbone in QRNet and LAVT. Considering that
the architecture and weights jointly determine the func-
tion of the visual backbone, this paper adopts a simpler
and fine-grained scheme that modifies the function of the
visual backbone with language-adaptive weights, as illus-
trated in Fig. 1 (c). Different from the extract-then-adjust
paradigm used by QRNet and LAVT, the visual backbone
equipped with language-adaptive weights can directly ex-
tract expression-relevant visual features without additional
feature-adjustment modules.
In this paper, we propose an active perception Visual
Grounding framework based on Language Adaptive
Weights, called VG-LAW. It can dynamically adjust the
behavior of the visual backbone by injecting information from the referring expression into the weights. Specifi-
cally, VG-LAW first obtains the specific language-adaptive
weights for the visual backbone through two successive
processes of linguistic feature aggregation and weight gen-
eration. Then, the language-aware visual backbone can
extract expression-relevant visual features without manu-
ally modifying the visual backbone architecture. Since
the extracted visual features are highly expression-relevant,
cross-modal interaction modules are not required for fur-
ther cross-modal fusion, and the entire network architecture
is more streamlined. Furthermore, based on the expression-
relevant features, we propose a lightweight but neat multi-
task prediction head for jointly referring expression com-
prehension (REC) and referring expression segmentation
(RES) tasks. Extensive experiments on RefCOCO [47],
RefCOCO+ [47], RefCOCOg [36], and ReferItGame [19]
datasets demonstrate the effectiveness of our method, which
achieves state-of-the-art performance.
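As a rough sketch of how expression-conditioned weights could be generated, the snippet below produces the weights of a single linear layer from an aggregated expression embedding via a low-rank factorization; the dimensions and the factorized form are assumptions for illustration, not the actual VG-LAW weight-generation scheme.

import torch
import torch.nn as nn

class LanguageAdaptiveLinear(nn.Module):
    # The layer's weight matrix and bias are generated per sample from the
    # sentence embedding, so the same visual tokens are filtered differently
    # for different referring expressions.
    def __init__(self, lang_dim, in_dim, out_dim, rank=8):
        super().__init__()
        self.in_dim, self.out_dim, self.rank = in_dim, out_dim, rank
        self.to_u = nn.Linear(lang_dim, out_dim * rank)
        self.to_v = nn.Linear(lang_dim, rank * in_dim)
        self.to_b = nn.Linear(lang_dim, out_dim)

    def forward(self, visual_tokens, lang_embed):
        # visual_tokens: (B, N, in_dim); lang_embed: (B, lang_dim)
        B = lang_embed.shape[0]
        U = self.to_u(lang_embed).view(B, self.out_dim, self.rank)
        V = self.to_v(lang_embed).view(B, self.rank, self.in_dim)
        W = torch.bmm(U, V)                      # (B, out_dim, in_dim)
        b = self.to_b(lang_embed)                # (B, out_dim)
        return torch.einsum("bnd,bod->bno", visual_tokens, W) + b.unsqueeze(1)

layer = LanguageAdaptiveLinear(lang_dim=768, in_dim=256, out_dim=256)
tokens = torch.rand(2, 196, 256)                 # patch tokens from the backbone
expression = torch.rand(2, 768)                  # aggregated expression feature
out = layer(tokens, expression)                  # expression-specific features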
The main contributions can be summarized as follows:
• We propose an active perception visual ground-
ing framework based on the language adaptive
weights, called VG-LAW, which can actively extract
expression-relevant visual features without manually
modifying the visual backbone architecture.
• Benefiting from the active perception of visual feature
extraction, we can directly utilize our proposed neat
but efficient multi-task head for REC and RES tasks
jointly without carefully designed cross-modal inter-
action modules.
• Extensive experiments demonstrate the effectiveness
of our framework, which achieves state-of-the-art per-
formance on four widely used datasets, i.e.,RefCOCO,
RefCOCO+, RefCOCOg, and ReferItGame.
|
Tarasiou_ViTs_for_SITS_Vision_Transformers_for_Satellite_Image_Time_Series_CVPR_2023 | Abstract
In this paper we introduce the Temporo-Spatial Vision
Transformer (TSViT), a fully-attentional model for general
Satellite Image Time Series (SITS) processing based on
the Vision Transformer (ViT). TSViT splits a SITS record
into non-overlapping patches in space and time which
are tokenized and subsequently processed by a factorized
temporo-spatial encoder. We argue that, in contrast to nat-
ural images, a temporal-then-spatial factorization is more
intuitive for SITS processing and present experimental evi-
dence for this claim. Additionally, we enhance the model’s
discriminative power by introducing two novel mechanisms
for acquisition-time-specific temporal positional encodings
and multiple learnable class tokens. The effect of all
novel design choices is evaluated through an extensive
ablation study. Our proposed architecture achieves state-
of-the-art performance, surpassing previous approaches
by a significant margin in three publicly available SITS
semantic segmentation and classification datasets. All
model, training and evaluation codes can be found at
https://github.com/michaeltrs/DeepSatModels .
| 1. Introduction
Monitoring man-made impacts and activities on the Earth's surface is essential to enable the design of effective interventions that increase the welfare and resilience of societies.
One example is the sector of agriculture in which monitor-
ing of crop development can help design optimum strategies
aimed at improving the welfare of farmers and resilience of
the food production system. The second United Nations Sustainable Development Goal (SDG 2), Ending Hunger, relies on increasing the crop productivity and revenues of farmers in poor and developing countries [35]; approximately 2.5 billion people's livelihoods depend mainly on producing crops [10]. Achieving the SDG 2 goals requires being able to accurately monitor yields and the evolution of cultivated areas in order to measure the progress towards achiev-
ing several goals, as well as to evaluate the effectiveness of
different policies or interventions. In the European Union
Figure 1. Model and performance overview. (top) TSViT archi-
tecture. A more detailed schematic is presented in Fig.4. (bottom)
TSViT performance compared with previous art (Table 2).
(EU) the Sentinel for Common Agricultural Policy program
(Sen4CAP) [2] focuses on developing tools and analytics to
support the verification of direct payments to farmers with
underlying environmental conditionalities such as the adop-
tion of environmentally-friendly [50] and crop diversifica-
tion [51] practices based on real-time monitoring by the
European Space Agency’s (ESA) Sentinel high-resolution
satellite constellation [1] to complement on-site verification. Recently, the volume and diversity of space-borne Earth Observation (EO) data [63] and post-processing tools [18, 61, 70] have increased exponentially. This wealth of
resources, in combination with important developments in
machine learning for computer vision [20, 28, 53], provides
an important opportunity for the development of tools for
the automated monitoring of crop development.
Towards more accurate automatic crop type recognition,
we introduce TSViT, the first fully-attentional architecture
for general SITS processing. An overview of the proposed
architecture can be seen in Fig.1 (top). Our novel design
introduces some inductive biases that make TSViT particu-
larly suitable for the target domain:
• Satellite imagery for monitoring land surface variability boasts frequent revisits, leading to long temporal sequences. To reduce the amount of computation we
factorize input dimensions into their temporal and spa-
tial components, providing intuition (section 3.4) and
experimental evidence (section 4.2) about why the or-
der of factorization matters.
• TSViT uses a Transformer backbone [64] following
the recently proposed ViT framework [13]. As a result,
every TSViT layer has a global receptive field in time
or space, in contrast to previously proposed convolu-
tional and recurrent architectures [14, 24, 40, 45, 49].
• To make our approach more suitable for SITS modelling, we propose a tokenization scheme for the input image time series together with acquisition-time-specific temporal positional encodings in order to extract date-aware features and to account for irregularities in SITS acquisition times (section 3.6; a sketch is given after this list).
• We make modifications to the ViT framework (sec-
tion 3.2) to enhance its capacity to gather class-specific
evidence which we argue suits the problem at hand
and design two custom decoder heads to accommodate
both global and dense predictions (section 3.5).
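The sketch below illustrates the acquisition-time-specific temporal encoding and the temporal-then-spatial factorization referred to above; the day-of-year lookup table and the single-layer encoders are simplifying assumptions, not the exact TSViT design.

import torch
import torch.nn as nn

class AcquisitionTimeEncoding(nn.Module):
    # Positional encoding indexed by the actual acquisition date (day of
    # year) of each image, so irregularly sampled SITS get date-aware
    # temporal encodings.
    def __init__(self, dim, num_days=366):
        super().__init__()
        self.table = nn.Embedding(num_days, dim)

    def forward(self, tokens, day_of_year):
        # tokens: (B, T, N, dim); day_of_year: (B, T) integer dates
        return tokens + self.table(day_of_year).unsqueeze(2)

B, T, N, dim = 2, 12, 16, 64          # batch, time steps, spatial patches, width
tokens = torch.rand(B, T, N, dim)
days = torch.randint(0, 366, (B, T))
tokens = AcquisitionTimeEncoding(dim)(tokens, days)

# Temporal-then-spatial factorization: attend over time per patch location,
# then over space per time step (single layers shown for brevity).
temporal = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
spatial = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
x = temporal(tokens.permute(0, 2, 1, 3).reshape(B * N, T, dim))
x = x.reshape(B, N, T, dim).permute(0, 2, 1, 3).reshape(B * T, N, dim)
x = spatial(x).reshape(B, T, N, dim)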
Our provided intuitions are tested through extensive abla-
tion studies on design parameters presented in section 4.2.
Overall, our architecture achieves state-of-the-art perfor-
mance on three publicly available datasets for classification and semantic segmentation, as presented in Table 2 and Fig. 1.
|
Sohn_Visual_Prompt_Tuning_for_Generative_Transfer_Learning_CVPR_2023 | Abstract
Learning generative image models from various domains
efficiently requires transferring knowledge from an image syn-
thesis model trained on a large dataset. We present a recipe
for learning vision transformers by generative knowledge
transfer. We base our framework on generative vision trans-
formers, which represent an image as a sequence of visual tokens processed by autoregressive or non-autoregressive transformers. To adapt to a new domain, we employ prompt tuning, which prepends learnable tokens called prompts to the image token sequence, and introduce a new prompt design for our task. We study a variety of visual domains with
varying amounts of training images. We show the effective-
ness of knowledge transfer and achieve significantly better image generation quality.1
| 1. Introduction
Image synthesis has witnessed tremendous progress recently with the advancement of deep generative models [2, 12, 20, 67, 69]. An ideal image synthesis system generates diverse, plausible, and novel scenes, capturing the appearance of objects and depicting their interactions. The success of image synthesis relies heavily on the availability of a large amount of diverse training data [73].
1https://github.com/google-research/generative_transfer
Transfer learning, a cornerstone invention in deep learn-
ing, has proven indispensable in an array of computer vision
tasks, including classification [ 35], object detection [ 18,19],
image segmentation [ 23,24],etc. However, transfer learn-
ing is not widely used for image synthesis. While recent
efforts have shown success in transferring knowledge from
pre-trained Generative Adversarial Network (GAN) mod-
els [46,60,71,76], their demonstrations are limited to nar-
row visual domains, e.g., faces or cars [ 46,76], as in Fig. 1,
or require a non-trivial amount of training data [60, 71] to
transfer to out-of-distribution domains.
In this work, we approach transfer learning for image
synthesis using generative vision transformers, an emerg-
ing class of image synthesis models, such as DALL·E [53], Taming Transformer [15], MaskGIT [7], CogView [13], NÜWA [75], and Parti [79], among others, which excel in im-
age synthesis tasks. We closely follow the recipe of trans-
fer learning for image classification [ 35], in which a source
model is first trained on a large dataset ( e.g., ImageNet) and
then transferred to a diverse collection of downstream tasks.
Except, in our setting, the input and output are reversed and
the model generates images from a class label.
We present a transfer learning framework using prompt
tuning [38,40]. While the technique has been used for trans-
fer learning of discriminative models for vision tasks [ 1,29],
we appear to be the first to adopt prompt tuning for trans-
fer learning of image synthesis. To this end, we propose a
parameter-efficient design of a prompt token generator that
admits condition variables ( e.g., class), a key for control-
lable image synthesis neglected in prompt tuning for dis-
criminative transfer [ 29,38]. We also introduce a marquee
header prompt that engineers learned prompts to enhance
generation diversity while retaining the generation quality.
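A hedged sketch of such a class-conditional prompt token generator is shown below; the token count, dimensions, and the way class embeddings are mixed with base prompts are illustrative assumptions rather than the paper's parameter-efficient design.

import torch
import torch.nn as nn

class ConditionalPromptGenerator(nn.Module):
    # Produces a small set of learnable prompt tokens conditioned on a class
    # label; the prompts are prepended to the frozen generative transformer's
    # image-token sequence and are the only parameters that get updated.
    def __init__(self, num_classes, num_prompts, dim):
        super().__init__()
        self.cls_embed = nn.Embedding(num_classes, dim)
        self.base_prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.mix = nn.Linear(2 * dim, dim)

    def forward(self, class_ids):
        # class_ids: (B,) -> prompts: (B, num_prompts, dim)
        c = self.cls_embed(class_ids)
        c = c.unsqueeze(1).expand(-1, self.base_prompts.shape[0], -1)
        p = self.base_prompts.unsqueeze(0).expand_as(c)
        return self.mix(torch.cat([p, c], dim=-1))

prompt_gen = ConditionalPromptGenerator(num_classes=100, num_prompts=16, dim=256)
image_tokens = torch.rand(4, 64, 256)             # token embeddings (frozen model)
prompts = prompt_gen(torch.randint(0, 100, (4,)))
sequence = torch.cat([prompts, image_tokens], dim=1)   # prompts prepended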
We conduct a large-scale study to understand the me-
chanics of transfer learning for generative vision transform-
ers. Two types of generative transformers – AutoRegressive
(AR) and Non-AutoRegressive (NAR) – are examined. AR transformers (e.g., DALL·E [53], Taming Transformer [15],
Parti [ 79]) generate image tokens sequentially with an
autoregressive lang |
Tiwary_ORCa_Glossy_Objects_As_Radiance-Field_Cameras_CVPR_2023 | Abstract
Reflections on glossy objects contain valuable and hidden
information about the surrounding environment. By con-
verting these objects into cameras, we can unlock exciting
applications, including imaging beyond the camera’s field-
of-view and from seemingly impossible vantage points, e.g.
from reflections on the human eye. However, this task is
challenging because reflections depend jointly on object ge-
ometry, material properties, the 3D environment, and the
observer’s viewing direction. Our approach converts glossy
objects with unknown geometry into radiance-field cameras
to image the world from the object’s perspective. Our key
insight is to convert the object surface into a virtual sensor
that captures cast reflections as a 2D projection of the 5D
environment radiance field visible to and surrounding the
object. We show that recovering the environment radiance
fields enables depth and radiance estimation from the ob-
ject to its surroundings in addition to beyond field-of-view
novel-view synthesis, i.e., rendering of novel views that are
only directly visible to the glossy object present in the scene,
but not the observer. Moreover, using the radiance field we
can image around occluders caused by close-by objects in
the scene. Our method is trained end-to-end on multi-view
images of the object and jointly estimates object geometry,
diffuse radiance, and the 5D environment radiance field.
For more information, visit our website.
| 1. Introduction
Imagine that you’re driving down a city street that is
packed with lines of parked cars on both sides. Inspection
of the cars' glass windshields, glossy paint, and plastic reveals sharp but faint and distorted views of the surroundings that might otherwise be hidden from you. Humans can infer
depth and semantic cues about the occluded areas in the en-
vironment by processing reflections visible on reflective ob-
jects, internally decomposing the object geometry and radi-
ance from the specular radiance being reflected onto it. Our
aim is to decompose the object from its reflections to “see”
the world from the object’s perspective, effectively turning
Figure 2. ORCa overview. We jointly estimate the object's geometry and diffuse radiance along with the environment radiance field
through a three-step approach. First, we model the object as a neural implicit surface (a). We model the reflections as probing the
environment on virtual viewpoints (b) estimated analytically from surface properties. We model the environment as a radiance field
queried on these viewpoints (c). Both neural implicit surface and environment radiance field are trained jointly on multi-view images of
the object using a photometric loss.
the object into a camera that images its environment. How-
ever, reflections pose a long-standing challenge in computer
vision as the reflections are a 2D projection of an unknown
3D environment that is distorted based on the shape of the
reflector.
To capture the 3D world from the object’s perspective,
we model the object’s surface as a virtual sensor that cap-
tures the 2D projection of a 5D environment radiance field
surrounding the object. This environment radiance field
consists largely of areas only visible to the observer through
the object’s reflections. Our use of environment radiance
fields not only enables depth and radiance estimation from
the object to its surroundings but also enables beyond field-
of-view novel-view synthesis, i.e. rendering of novel views
that are only directly visible to the glossy object present
in the scene but not the observer. Unlike conventional ap-
proaches that model the environment as a 2D map, our ap-
proach models it as a 5D field without assuming the scene is
infinitely far away. Moreover, by sampling the 5D radiance
field, instead of a 2D map, we can capture depth and images
around occluders, such as close-by objects in the scene, as
shown in Fig. 3. These applications cannot be done from a
2D environment map.
We aim to decompose the reflections on the object's surface from the surface itself and exploit those reflections to construct a radiance field surrounding the object, thereby captur-
ing the 3D world in the process. This is a challenging task
because the reflections are extremely sensitive to local ob-
ject geometry, viewing direction and inter-reflections due
to the object’s surface. To capture this radiance field, we
convert glossy objects with unknown geometry and texture
into radiance-field cameras. Specifically, we exploit neural
rendering to estimate the local surface of the object viewed
Figure 3. Advantages of 5D environment radiance field . Mod-
eling reflections on object surfaces (a) as a 5D env. radiance field
enables beyond field-of-view novel-view synthesis, including ren-
dering of the environment from translated virtual camera views
(b). Depth (c) and environment radiance of translated and parallax
views can further enable imaging behind occluders, for example
revealing the tails behind the primary Pokemon occluders (d).
from each pixel of the real camera. We then convert this
local surface into a virtual pixel that captures radiance from
the environment. This virtual pixel captures the environ-
ment radiance, as shown in Fig. 5. We estimate the outgo-
ing frustum from the virtual pixel as a cone that samples
the scene. By sampling the scene from many virtual pix-
els on the object surface, we construct an environment ra-
diance field that can be queried independently of the object
surface, enabling beyond field-of-view novel-view synthesis
from previously unsampled viewpoints.
Our approach jointly estimates object geometry, diffuse
radiance, and the environment radiance field from multi-
view images of glossy objects with unknown geometry and
diffuse texture in three steps. First, we use neural signed
distance functions (SDF) and an MLP to model the glossy
object’s geometry as a neural implicit surface and diffuse
radiance, respectively, similar to PANDORA [10]. Then,
for every pixel on the observer’s camera, we estimate the
virtual pixels on the object’s surface based on the estimated
local geometry from the neural SDF. We analytically com-
pute the parameters of the virtual cone through the virtual
pixel. Lastly, we use the cone formulation in MipNeRF [5]
to cast virtual cones from the virtual camera to recover the
environment radiance.
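As a rough illustration of the virtual-cone construction, the sketch below mirrors the camera ray about the local surface normal and widens the cone footprint using the mean curvature; the curvature-based scaling is a simplifying assumption for illustration, not the paper's analytic cone parameterization.

import torch

def virtual_cone_from_reflection(ray_dir, hit_point, normal,
                                 pixel_cone_radius, mean_curvature):
    # The reflected direction mirrors the view ray about the local normal;
    # a curved (convex) reflector spreads the reflected bundle, so the cone
    # radius is scaled by (1 + |curvature|) as a crude proxy for that spread.
    d = ray_dir / ray_dir.norm(dim=-1, keepdim=True)
    n = normal / normal.norm(dim=-1, keepdim=True)
    reflected = d - 2.0 * (d * n).sum(-1, keepdim=True) * n
    virtual_radius = pixel_cone_radius * (1.0 + mean_curvature.abs())
    return hit_point, reflected, virtual_radius

# Toy usage: one camera ray hitting a surface point; the returned origin,
# direction, and radius can parameterize a MipNeRF-style cone that queries
# the environment radiance field.
origin, direction, radius = virtual_cone_from_reflection(
    ray_dir=torch.tensor([[0.0, 0.0, -1.0]]),
    hit_point=torch.tensor([[0.1, 0.2, 0.0]]),
    normal=torch.tensor([[0.0, 0.0, 1.0]]),
    pixel_cone_radius=torch.tensor([1e-3]),
    mean_curvature=torch.tensor([0.5]),
)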
To summarize, we make the following contributions:
• We present a method to convert implicit surfaces into
virtual sensors that can image their surroundings using
virtual cones. (Sec. 3.3)
• We jointly estimate object geometry, diffuse radiance,
and estimate the 5D environment radiance field sur-
rounding the object. (Fig. 7 & 9)
• We show that the environment radiance field can be
queried to perform beyond-field-of-view novel view-
point synthesis, i.e., rendering views only visible to the ob-
ject in the scene (Section 3.4)
Scope. We only model glossy objects with low rough-
ness, as such specular reflections tend to have a high signal-to-noise ratio and therefore provide a sharper estimate of the en-
vironment radiance field. However, we note that the vir-
tual cone computation can be extended to model the cone
radius as a function of surface roughness. Deblurring ap-
proaches can further improve the resolution of the estimated
environment. In addition, we approximate the local curva-
ture using mean curvature, which fails for objects with vary-
ing radius of curvature along the tangent space. We explain
how our virtual cone curvature estimation can be extended
to handle general shape operators in the supplementary ma-
terial. Lastly, similar to other multi-view approaches, our
approach relies on a sufficient virtual baseline between vir-
tual viewpoints to recover the environment radiance field.
|
Urooj_Learning_Situation_Hyper-Graphs_for_Video_Question_Answering_CVPR_2023 | Abstract
Answering questions about complex situations in videos
requires not only capturing the presence of actors, objects,
and their relations but also the evolution of these relation-
ships over time. A situation hyper-graph is a representa-
tion that describes situations as scene sub-graphs for video
frames and hyper-edges for connected sub-graphs and has
been proposed to capture all such information in a compact
structured form. In this work, we propose an architecture for
Video Question Answering (VQA) that enables answering
questions related to video content by predicting situation
hyper-graphs, coined Situation Hyper-Graph based Video
Question Answering (SHG-VQA). To this end, we train a
situation hyper-graph decoder to implicitly identify graph
representations with actions and object/human-object rela-
tionships from the input video clip, and to use cross-attention
between the predicted situation hyper-graphs and the ques-
tion embedding to predict the correct answer. The proposed
method is trained in an end-to-end manner and optimized by
a VQA loss with the cross-entropy function and a Hungarian
matching loss for the situation graph prediction. The effec-
tiveness of the proposed architecture is extensively evaluated
on two challenging benchmarks: AGQA and STAR. Our
results show that learning the underlying situation hyper-
graphs helps the system to significantly improve its perfor-
mance for novel challenges of video question-answering
tasks1.
| 1. Introduction
Video question answering in real-world scenarios is a
challenging task as it requires focusing on several factors
including the perception of the current scene, language un-
derstanding, situated reasoning, and future prediction. Visual
perception in the reasoning task requires capturing various
aspects of visual understanding, e.g., detecting a diverse set
1Code will be available at https://github.com/aurooj/SHG-VQA
Figure 1. The situation hyper-graph for a video is composed of
situations with entities and their relationships (shown as subgraphs
in the pink box). These situations may evolve over time. Temporal
actions act as hyper-edges connecting these situations into one situ-
ation hyper-graph. Learning situation graphs, as well as temporal
actions, is vital for reasoning-based video question answering.
of entities, recognizing their interactions, as well as under-
standing the changing dynamics between these entities over
time. Similarly, linguistic understanding has its challenges
as some question or answer concepts may not be present in
the input text or video.
Visual question answering, as well as its extension over
time, video question answering, have both benefited from
representing knowledge in graph structures, e.g., scene
graphs [20, 35], spatio-temporal graphs [4, 55], and knowl-
edge graphs [36, 45]. Another approach in this direction is
the re-introduction of the concept of “ situation cognition ”
embodied in “ situation hyper-graphs ” [47]. This adds the
computation of actions to the graphs that capture the interac-
tion between entities. In this case, situations are represented
by hyper-graphs that join atomic entities and relations (e.g.,
agents, objects, and relationships) with their actions (Fig.
1). This is an ambitious task for existing systems as it is
impractical to encapsulate all possible interactions in the
real-world context.
Recent work [23] shows that transformers are capable
of learning graphs without adapting graph-specific details in the architectures, achieving competitive or even better performance than sophisticated graph-specific models. Our work supports this idea by implicitly learning the underlying hyper-graphs of a video. Thus, it requires no graph computation at inference time and uses the decoder's output directly in the cross-attention module. More precisely, we propose to learn
situation hyper-graphs, namely framewise actor-object and
object-object relations as well as their respective actions,
from the input video directly without the need for explicit
object detection or other required prior knowledge. While
the actions capture events across transitions over multiple
frames, such as Drinking from a bottle , the relationship en-
coding actually considers all possible combinations of static,
single frame actor-object, and object-object relationships as
unique classes, e.g., in the form of person – hold – bottle
or bottle – stands on – table, thus serving as an object and
relation classifier. Leveraging this setup allows us to stream-
line the spatio-temporal graph learning as a set prediction
task for predicting relationship predicates and actions in a
Situation hyper-graph Decoder block. To train the Situation
Graph Decoder, we use a bipartite matching loss between
the predicted set and ground truth hyper-graph tokens. The
output of the situation graph decoder is a set of action and
relationship tokens, which are then combined with the em-
bedding of the associated question to derive the final answer.
An overview of the proposed architecture is given in Fig. 2.
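As a concrete illustration of the set-prediction training described above, the following sketch shows a Hungarian-matching loss between predicted relation/action tokens and ground-truth labels. It is a hedged PyTorch sketch: the cost definition, tensor shapes, and the omission of a "no relation" class for unmatched queries are our own simplifying assumptions, not the exact training code.

```python
# Illustrative sketch of a Hungarian-matching set-prediction loss for hyper-graph tokens.
import torch
import torch.nn.functional as F
from scipy.optimize import linear_sum_assignment

def hypergraph_set_loss(pred_logits, gt_labels):
    """pred_logits: (Q, C) logits for Q predicted relation/action tokens.
    gt_labels:   (G,) ground-truth class indices, with G <= Q."""
    probs = pred_logits.softmax(-1)                       # (Q, C)
    cost = -probs[:, gt_labels]                           # cost of matching query q to ground truth g
    q_idx, g_idx = linear_sum_assignment(cost.detach().cpu().numpy())
    q_idx, g_idx = torch.as_tensor(q_idx), torch.as_tensor(g_idx)
    # Cross-entropy only on the matched (query, ground-truth) pairs; unmatched
    # queries would usually be supervised toward a "no relation" class (omitted here).
    return F.cross_entropy(pred_logits[q_idx], gt_labels[g_idx])
```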
Note that, compared to other works targeting video scene
graph generation, e.g., those listed in [63], we are less fo-
cused on learning the best possible scene graph, but rather
on learning the representation of the scene which best sup-
ports the question answering task. Thus, while capturing the
essence of a scene, as well as the transition from one scene
to the other, we are not only optimizing the scene graph
accuracy but also considering the VQA loss.
We evaluate the proposed method on two challenging
video question answering benchmarks: a) STAR [47], fea-
turing four different question types, interaction, sequence,
prediction, and feasibility based on a subset of the real-
world Charades dataset [44]; and b) Action Genome QA
(AGQA) [11] dataset which tests vision focused reasoning
skills based on novel compositions, novel reasoning steps,
and indirect references. Compared to other VQA datasets,
these datasets provide dense ground truth hyper-graph infor-
mation for each video, which allows us to learn the respective
embedding. Our results show that the proposed hyper-graph
encoding significantly improves VQA performance as it has
the ability to infer correct answers from spatio-temporal
graphs from the input video. Our ablations further reveal
that achieving high-quality graphs can be critical for VQA
performance.
Our contributions in this paper are as follows:
•We introduce a novel architecture that enables the computation of situation hyper-graphs from video data to
solve the complex reasoning task of video question-
answering;
•We propose a situation hyper-graph decoder module
to decode the atomic actions and object/actor-object
relationships and model the hyper-graph learning as a
transformer-based set prediction task and use a set pre-
diction loss function to predict actions and relationships
between entities in the input video;
•We use the resulting high-level embedding information
as sole visual information for the reasoning and show
that this is sufficient for an effective VQA system.
|
Somasundaram_Role_of_Transients_in_Two-Bounce_Non-Line-of-Sight_Imaging_CVPR_2023 | Abstract
The goal of non-line-of-sight (NLOS) imaging is to image
objects occluded from the camera’s field of view using mul-
tiply scattered light. Recent works have demonstrated the
feasibility of two-bounce (2B) NLOS imaging by scanning
a laser and measuring cast shadows of occluded objects in
scenes with two relay surfaces. In this work, we study the
role of time-of-flight (ToF) measurements, i.e. transients, in
2B-NLOS under multiplexed illumination. Specifically, we
study how ToF information can reduce the number of mea-
surements and spatial resolution needed for shape recon-
struction. We present our findings with respect to trade-
offs in (1) temporal resolution, (2) spatial resolution, and
(3) number of image captures by studying SNR and recov-
erability as functions of system parameters. This leads to
a formal definition of the mathematical constraints for 2B
lidar. We believe that our work lays an analytical ground-
work for design of future NLOS imaging systems, especially
as ToF sensors become increasingly ubiquitous. | 1. Introduction
Non-line-of-sight (NLOS) imaging aims to reconstruct
objects occluded from direct line of sight and has the po-
tential to be transformative in numerous applications across
autonomous driving, search and rescue, and non-invasive
medical imaging [24]. The key approach is to measure light
that has undergone multiple surface scattering events and
computationally invert these measurements to estimate hid-
den geometries. Recent work used two-bounce light, as
shown in Fig. 2a, to reconstruct high quality shapes behind
occluders [15]. The key idea is that two-bounce light cap-
tures information about the shadows of the occluded object.
By scanning the laser source at different points l on a relay
surface and measuring multiple shadow images, it is possi-
ble to reconstruct the hidden object by computing the visual
hull [20] of the measured shadows.
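As a rough illustration of this shadow-carving idea, the sketch below keeps only voxels whose projection from every virtual source l lands inside the corresponding measured shadow. The voxel grid extents and the assumed project helper (mapping voxel centers to relay-wall pixels) are illustrative placeholders, not the authors' reconstruction code.

```python
# Hedged sketch of visual-hull (shadow-carving) reconstruction from binary shadow images.
import numpy as np

def carve_visual_hull(shadow_masks, light_positions, project, res=64):
    """shadow_masks: list of HxW binary images (1 = in shadow), one per laser spot l.
    light_positions: matching 3D virtual-source positions.
    project(points, l): assumed helper mapping 3D voxel centers to (u, v) pixel
    coordinates on the relay wall for source l."""
    grid = np.linspace(-1.0, 1.0, res)
    X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
    voxels = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)
    keep = np.ones(len(voxels), dtype=bool)
    for mask, l in zip(shadow_masks, light_positions):
        u, v = project(voxels, l)
        u = np.clip(u, 0, mask.shape[1] - 1).astype(int)
        v = np.clip(v, 0, mask.shape[0] - 1).astype(int)
        keep &= mask[v, u] > 0          # a voxel must be consistent with every observed shadow
    return voxels[keep]
```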
Benefits of Two-Bounce Two-bounce (2B) light can be
captured using two relay surfaces on opposite sides of the
hidden scene (Fig. 1a). This can occur in a variety of real-
world settings such as tunnels, hallways, streets, and clut-
Figure 2. Two-Bounce NLOS Imaging. (a) Two-bounce NLOS
imaging performs 3D reconstruction by using information con-
tained in the shadows of the hidden object when illuminated by
one virtual source l at a time. (b) In this work, we study the bene-
fits of using transient information for two-bounce NLOS imaging
under the presence of multiplexed illumination.
tered environments like forests (Fig. 1b). For example,
imagine an autonomous vehicle in a tunnel being able to
see ahead of the car in front of it by using two-bounce sig-
nals from the sides of the tunnel. Two-bounce light also
helps alleviate the signal-to-noise ratio (SNR) issue in 3B-
NLOS imaging, since 3B signals experience significant sig-
nal attenuation at each surface scatter event. Furthermore,
2B light captures information about the cast shadows of the
hidden object instead of reflectance, as in the 3B case. As a
result, the SNR of 2B signals is independent of the object’s
albedo and can therefore robustly image darker objects.
Contributions In this work, we propose using two-
bounce transients for NLOS imaging to enable the use of
multiplexed illumination. 3B-NLOS transient imaging sys-
tems have been analyzed previously, but we believe we are
the first to analyze the use of multiplexed illumination and
transient measurements for 2B-NLOS (Fig. 2b). Our key in-
sight is that multiplexed illumination enables measurement
of occluded objects with fewer image captures, and 2B tran-
sients enable demultiplexing shadows, as shown in Fig. 3.
The robust SNR and promise of few-shot capture afforded
by 2B transients makes the idea an important direction for
few-shot NLOS imaging in general environments. We sum-
marize our contributions below.
•Model : We present new insights into the algebraic for-
mulations of two-bounce lidar for NLOS imaging.
•Analysis : We analyze tradeoffs with spatial resolution,
temporal resolution, and number of image captures by
using quantitative metrics such as SNR and recover-
ability, as well as qualitative validation from real and
simulated results.
Although we envision that the ideas introduced here will inspire
future works in single-shot NLOS imaging, we do not claim
this as a contribution in this paper.
Scope of This Work All data used for this work is from
simulation and a single-pixel SPAD sensor. We do not phys-
ically realize few-shot results in this paper due to limited
availability of high-resolution SPAD arrays presently. In-
stead, we emulate SPAD array measurements by scanning
a single-pixel SPAD across the field of view and emu-
late multiplexed illumination by scanning a laser spot, ac-
quiring individual transient images, and summing the im-
ages in post-processing. These measurements, however, are
more conducive to our goal of performing analysis that ex-
plores the landscape of possibilities in NLOS imaging with
advances in ToF sensors. We believe that the ideas and anal-
ysis introduced in this work will be increasingly relevant as
SPAD array technology matures [19, 26, 44].
|
Smith_CODA-Prompt_COntinual_Decomposed_Attention-Based_Prompting_for_Rehearsal-Free_Continual_Learning_CVPR_2023 | Abstract
Computer vision models suffer from a phenomenon
known as catastrophic forgetting when learning novel con-
cepts from continuously shifting training data. Typical solu-
tions for this continual learning problem require extensive
rehearsal of previously seen data, which increases memory
costs and may violate data privacy. Recently, the emer-
gence of large-scale pre-trained vision transformer mod-
els has enabled prompting approaches as an alternative
to data-rehearsal. These approaches rely on a key-query
mechanism to generate prompts and have been found to
be highly resistant to catastrophic forgetting in the well-
established rehearsal-free continual learning setting. How-
ever, the key mechanism of these methods is not trained
end-to-end with the task sequence. Our experiments show
that this leads to a reduction in their plasticity, hence sac-
rificing new task accuracy, and inability to benefit from ex-
panded parameter capacity. We instead propose to learn a
set of prompt components which are assembled with input-
conditioned weights to produce input-conditioned prompts,
resulting in a novel attention-based end-to-end key-query
scheme. Our experiments show that we outperform the cur-
rent SOTA method DualPrompt on established benchmarks
by as much as 4.5% in average final accuracy. We also
outperform the state of art by as much as 4.4% accuracy
on a continual learning benchmark which contains both
class-incremental and domain-incremental task shifts, cor-
responding to many practical settings. Our code is avail-
able at https://github.com/GT-RIPL/CODA-Prompt
| 1. Introduction
For a computer vision model to succeed in the real world,
it must overcome brittle assumptions that the concepts it
will encounter after deployment will match those learned
a-priori during training. The real world is indeed dynamic
and contains continuously emerging objects and categories.
Once deployed, models will encounter a range of differ-
ences from their training sets, requiring us to continuously
Figure 1. Prior work [65] learns a pool of key-value pairs to select
learnable prompts which are inserted into several layers of a pre-
trained ViT (the prompting parameters are unique to each layer).
Our work introduces a decomposed prompt which consists of
learnable prompt components that assemble to produce attention-
conditioned prompts. Unlike prior work, ours is optimized in an
end-to-end fashion (denoted with thick, green lines).
update them to avoid stale and decaying performance.
One way to update a model is to collect additional train-
ing data, combine this new training data with the old train-
ing data, and then retrain the model from scratch. While
this will guarantee high performance, it is not practical for
large-scale applications which may require lengthy training
times and aggressive replacement timelines, such as models
for self-driving cars or social media platforms. This could
incur high financial [26] and environmental [33] costs if the
replacement is frequent. We could instead update the model
by only training on the new training data, but this leads to
a phenomenon known as catastrophic forgetting [46] where
the model overwrites existing knowledge when learning the
new data, a problem known as continual learning .
The most effective approaches proposed by the continual
learning community involve saving [50] or generating [55]
a subset of past training data and mixing it with future task
data, a strategy referred to as rehearsal . Yet many impor-
tant applications are unable to store this data because they
work with private user data that cannot be stored long term.
In this paper, we instead consider the highly-impactful
and well-established setting of rehearsal-free continual
learning [38, 56, 57, 65, 66]1, limiting our scope to strate-
gies for continual learning which do not store training
data. While rehearsal-free approaches have classically
under-performed rehearsal-based approaches in this chal-
lenging setting by wide-margins [57], recent prompt-based
approaches [65, 66], leveraging pre-trained Vision trans-
formers (ViT), have had tremendous success and can even
outperform state-of-the-art (SOTA) rehearsal-based meth-
ods. These prompting approaches boast a strong protection
against catastrophic forgetting by learning a small pool of
insert-able model embeddings (prompts) rather than mod-
ifying vision encoder parameters directly. One drawback
of these approaches, however, is that they cannot be opti-
mized in an end-to-end fashion as they use a key and query
to select a prompt index from the pool of prompts, and thus
rely on a second, localized optimization to learn the keys
because the model gradient cannot backpropagate through
the key/query index selection. Furthermore, these meth-
ods reduce forgetting by sacrificing new task accuracy (i.e.,
they lack sufficient plasticity ). We show that expanding the
prompt size does not increase the plasticity, motivating us
to grow learning capacity from a new perspective.
We replace the prompt pool with a decomposed prompt
that consists of a weighted sum of learnable prompt com-
ponents (Figure 1). This decomposition enables higher
prompting capacity by expanding in a new dimension (the
number of components). Furthermore, this inherently en-
courages knowledge re-use, as future task prompts will in-
clude contributions from prior task components. We also
introduce a novel attention-based component-weighting
scheme, which allows for our entire method to be optimized
in an end-to-end fashion unlike the existing works, increas-
ing our plasticity to better learn future tasks. We boast sig-
nificant performance gains on existing rehearsal-free con-
tinual learning benchmarks w.r.t. SOTA prompting base-
lines in addition to a challenging dual-shift benchmark con-
taining both incremental concept shifts and covariate do-
main shifts, highlighting our method’s impact and gener-
ality. Our method improves results even under equivalent
parameter counts, but importantly can scale performance
by increasing capacity, unlike prior methods which quickly
plateau. In summary, we make the following contributions:
1We focus on continual learning over a single, expanding classification
head called class-incremental continual learning . This is different from the
multi-task continual learning setting, known as task-incremental continual
learning, where we learn separate classification heads for each task and the
task label is provided during inference. [25, 61]
1. We introduce a decomposed attention-based prompt-
ing for rehearsal-free continual learning, characterized
by an expanded learning capacity compared to exist-
ing continual prompting approaches. Importantly, our
approach can be optimized in an end-to-end fashion,
unlike the prior SOTA.
2. We establish a new SOTA for rehearsal-free continual
learning on the well-established ImageNet-R [21, 65]
and CIFAR-100 [32] benchmarks, beating the previous
SOTA method DualPrompt [65] by as much as 4.5%.
3. We evaluate on a challenging benchmark with dual
distribution shifts (semantic and covariate) using the
ImageNet-R dataset, and again outperform the state of
art, highlighting the real-world impact and generality
of our approach.
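To make the decomposed, attention-weighted prompting described above concrete, the sketch below forms the inserted prompt as a weighted sum of learnable components, with weights given by attention-modulated cosine similarities against learnable keys; because no index selection is involved, gradients reach every parameter. Dimensions, initialization, and the exact weighting function are illustrative assumptions rather than the paper's precise formulation.

```python
# Hedged sketch of a decomposed, attention-conditioned prompt module.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecomposedPrompt(nn.Module):
    def __init__(self, num_components=100, prompt_len=8, dim=768):
        super().__init__()
        self.components = nn.Parameter(0.02 * torch.randn(num_components, prompt_len, dim))
        self.keys = nn.Parameter(0.02 * torch.randn(num_components, dim))
        self.attn = nn.Parameter(0.02 * torch.randn(num_components, dim))  # per-component feature selection

    def forward(self, query):                      # query: (B, dim) image feature from the pre-trained ViT
        q = query.unsqueeze(1) * self.attn         # (B, M, dim) attention-modulated queries
        alpha = F.cosine_similarity(q, self.keys.unsqueeze(0), dim=-1)     # (B, M) component weights
        # Prompt = weighted sum of components; the whole module is trained end-to-end.
        return torch.einsum("bm,mld->bld", alpha, self.components)         # (B, prompt_len, dim)
```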
|
Tian_GeoMAE_Masked_Geometric_Target_Prediction_for_Self-Supervised_Point_Cloud_Pre-Training_CVPR_2023 | Abstract
This paper tries to address a fundamental question in
point cloud self-supervised learning: what is a good sig-
nal we should leverage to learn features from point clouds
without annotations? To answer that, we introduce a point
cloud representation learning framework, based on geo-
metric feature reconstruction. In contrast to recent papers
that directly adopt masked autoencoder (MAE) and only
predict original coordinates or occupancy from masked
point clouds, our method revisits differences between im-
ages and point clouds and identifies three self-supervised
learning objectives peculiar to point clouds, namely cen-
troid prediction, normal estimation, and curvature predic-
tion. Combined, these three objectives yield a nontrivial
self-supervised learning task and mutually facilitate mod-
els to better reason fine-grained geometry of point clouds.
Our pipeline is conceptually simple and it consists of two
major steps: first, it randomly masks out groups of points,
followed by a Transformer-based point cloud encoder; sec-
ond, a lightweight Transformer decoder predicts centroid,
normal, and curvature for points in each voxel. We transfer
the pre-trained Transformer encoder to a downstream per-
ception model. On the nuScenes Dataset, our model achieves
3.38 mAP improvement for object detection, 2.1 mIoU gain
for segmentation, and 1.7 AMOTA gain for multi-object
tracking. We also conduct experiments on the Waymo Open
Dataset and achieve significant performance improvements
over baselines as well.1
| 1. Introduction
While object detection and segmentation from LiDAR
point clouds have achieved significant progress, these mod-
els usually demand a large amount of 3D annotations that
are hard to acquire. To alleviate this issue, recent works ex-
plore learning representations from unlabeled point clouds,
*Corresponding to: [email protected]
1Our code is available at https://github.com/Tsinghua-MARS-Lab/GeoMAE.
Figure 1. Pixel value regression has been proved effective in
masked autoencoder pre-training for images. We find this practice
ineffective in point cloud pre-training and propose a set of geome-
try aware prediction targets.
such as contrastive learning [20, 46, 53], and mask model-
ing [32, 51]. Similar to image-based representation learn-
ing settings, these representations are transferred to down-
stream tasks for weight initialization. However, the exist-
ing self-supervised pretext tasks do not bring adequate im-
provements to the downstream tasks as expected.
Contrastive learning based methods typically encode dif-
ferent ‘views’ (potentially with data augmentation) of point
clouds into feature space. They bring features of the same
point cloud closer and make features of different point
clouds ‘repel’ each other. Other recent works use masked
modeling to learn point cloud features through self re-
construction [32, 51]. That is, randomly sparsified point
clouds are encoded by point cloud feature extractors, fol-
lowed by a reconstruction module to predict original point
clouds. These methods, when applied to point clouds, ig-
nore the fundamental difference of point clouds from im-
ages – point clouds provide scene geometry while images
provide brightness. As shown in Figure 1, this modality
disparity hampers direct use of methods developed in the
image domain for point cloud domain, and thus calls for
novel self-supervised objectives dedicated to point clouds.
Inspired by modeling and computational techniques in
geometry processing, we introduce a self-supervised learn-
ing framework dedicated to point clouds. Most importantly,
we design a series of prediction targets which describe the
fine-grained geometric features of the point clouds. These
geometric feature prediction tasks jointly drive models to
recognize different shapes and areas of scenes. Concretely,
our method starts with a point cloud voxelizer, followed
by a feature encoder to transform each voxel into a feature
token. These feature tokens are randomly dropped based
on a pre-defined mask ratio. Similar to the original MAE
work [18], visible tokens are encoded by a Transformer en-
coder. Then a Transformer decoder reconstructs the fea-
tures of the original voxelized point clouds. Finally, our
model predicts point statistics and surface properties in par-
allel branches.
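One standard way to compute the per-voxel centroid, normal, and curvature targets is sketched below: the centroid is the mean of the points, while the normal and a curvature proxy come from an eigen-decomposition of the local covariance. This estimator is assumed here for illustration; the paper's exact definitions may differ.

```python
# Hedged sketch of geometric prediction targets for the points inside one voxel.
import numpy as np

def geometric_targets(points):
    """points: (N, 3) LiDAR points falling into one voxel, N >= 3."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)               # (3, 3) local covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    normal = eigvecs[:, 0]                            # direction of least variance approximates the surface normal
    curvature = eigvals[0] / (eigvals.sum() + 1e-8)   # surface-variation proxy in [0, 1/3]
    return centroid, normal, curvature
```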
We conduct experiments on a diverse set of large-
scale point cloud datasets including nuScenes [4] and
Waymo [39]. Our setting consists of a self-supervised pre-
training stage and a downstream task stage (3D detection,
3D tracking, segmentation), where they share the same
point cloud backbone. Our results show that even with-
out additional unlabeled point clouds, self-supervised pre-
training with objectives proposed by this paper can signif-
icantly boost the performance of 3D object detection. To
summarize, our contributions are:
• We introduce geometry aware self-supervised objec-
tives for point clouds pre-training. Our method lever-
ages fine-grained point statistics and surface properties
to enable effective representation learning.
• With our novel learning objectives, we achieve state-
of-the-art performance compared to previous 3D self-
supervised learning methods on a variety of down-
stream tasks including 3D object detection, 3D/BEV
segmentation, and 3D multi-object tracking.
• We conduct comprehensive ablation studies to under-
stand the effectiveness of each module and learning
objective in our approach.
|
Tao_Siamese_Image_Modeling_for_Self-Supervised_Vision_Representation_Learning_CVPR_2023 | Abstract
Self-supervised learning (SSL) has delivered superior
performance on a variety of downstream vision tasks. Two
main-stream SSL frameworks have been proposed, i.e., In-
stance Discrimination (ID) and Masked Image Modeling
(MIM). ID pulls together representations from different
views of the same image, while avoiding feature collapse.
It lacks spatial sensitivity, which requires modeling the lo-
cal structure within each image. On the other hand, MIM
reconstructs the original content given a masked image.
It instead does not have good semantic alignment, which
requires projecting semantically similar views into nearby
representations. To address this dilemma, we observe that
(1) semantic alignment can be achieved by matching dif-
ferent image views with strong augmentations; (2) spatial
sensitivity can benefit from predicting dense representations
with masked images. Driven by these analysis, we propose
Siamese Image Modeling (SiameseIM), which predicts the
dense representations of an augmented view, based on an-
other masked view from the same image but with different
augmentations. SiameseIM uses a Siamese network with
two branches. The online branch encodes the first view,
and predicts the second view’s representation according to
the relative positions between these two views. The target
branch produces the target by encoding the second view.
SiameseIM can surpass both ID and MIM on a wide range
of downstream tasks, including ImageNet finetuning and
linear probing, COCO and LVIS detection, and ADE20k se-
mantic segmentation. The improvement is more significant
in few-shot, long-tail and robustness-concerned scenarios.
Code shall be released.
∗Equal contribution. †This work was done while Chenxin Tao and
Weijie Su were interns at Shanghai Artificial Intelligence Laboratory.
BCorresponding author. | 1. Introduction
Self-supervised learning (SSL) has been pursued in the
vision domain for a long time [32]. It enables us to pre-
train models without human-annotated labels, which makes
it possible to exploit huge amounts of unlabeled data. SSL
has provided competitive results against supervised pre-
training baselines in various downstream tasks, including
image classification [4, 23], object detection [35] and se-
mantic segmentation [24].
To effectively train models in the SSL manner, re-
searchers design the so-called “pretext tasks” to generate
supervision signals. One of the most typical frameworks
isInstance Discrimination (ID) , whose core idea is to pull
together representations of different augmented views from
the same image, and avoid representational collapse. Differ-
ent variants of ID have been proposed, including contrastive
learning [9, 24], asymmetric networks [12, 21], and feature
decorrelation [5, 57]. A recent work [43] has shown the in-
trinsic consistency among these methods via their similar
gradient structures. For ID methods, the representations of
each image are well separated, thus inducing good linear
separability. However, as shown in [35], for transfer learn-
ing on detection tasks with Vision Transformers [19], ID is
not superior to supervised pre-training, and even lags be-
hind random initialization given enough training time.
Recently, another SSL framework has gradually at-
tracted more attention, namely Masked Image Modeling
(MIM) [4,23]. MIM methods train the model to reconstruct
the original content from a masked image. Such practice
can help to learn the rich local structures within an image,
leading to excellent performance in dense prediction tasks
such as object detection [35]. Nevertheless, MIM does not
have good linear separability as ID, and usually performs
poorly under the few-shot classification settings [1].
Both ID and MIM methods have their own strengths and
Method                   ImageNet            COCO         ADE20k   LVIS                  Robustness
                         FT    LIN   FT 1%   APb   APm    mIoU     APb-rare   APm-rare   avg score*
MoCo-v3 (ID method)      83.0  76.7  63.4    47.9  42.7   47.3     25.5       25.8       43.4
MAE (MIM method)         83.6  68.0  51.1    51.6  45.9   48.1     29.3       29.1       41.8
SiameseIM (ours)         84.1  78.0  65.1    52.1  46.2   51.1     30.9       30.1       47.9
Improve w.r.t. MoCo-v3   +1.1  +1.3  +1.7    +4.2  +3.5   +3.8     +5.4       +4.3       +4.5
Improve w.r.t. MAE       +0.5  +10.0 +14.0   +0.5  +0.3   +3.0     +1.6       +1.0       +6.1
Table 1. SiameseIM surpasses MoCo-v3 (ID method) and MAE (MIM method) on a wide range of downstream tasks.∗The robustness
average score is calculated by averaging top-1 acc of IN-A, IN-R, IN-S, and 1-mCE of IN-C. For detailed results, please refer to Section 4.2.
Figure 1. Comparisons among ID, MIM and SiameseIM. Matching different augmented views can help to learn semantic alignment, which
is adopted by ID and SiameseIM. Predicting dense representations from masked images is beneficial to obtain spatial sensitivity, which is
adopted by MIM and SiameseIM.
weaknesses. We argue that this dilemma is caused by ne-
glecting the representation requirements of either seman-
tic alignment or spatial sensitivity. Specifically, MIM op-
erates within each image independently, regardless of the
inter-image relationship. The representations of semanti-
cally similar images are not well aligned, which further re-
sults in poor linear probing and few-shot learning perfor-
mances of MIM. On the other hand, ID only uses a global
representation for the whole image, and thus fails to model
the intra-image structure. The spatial sensitivity of features
is therefore missing, and ID methods usually produce infe-
rior results on dense prediction.
To overcome this dilemma, we observe the key factors
for semantic alignment and spatial sensitivity: (1) semantic
alignment requires that images with similar semantics are
projected into nearby representations. This can be achieved
by matching different augmented views from the same im-
age. Strong augmentations are also beneficial because they
provide more invariance to the model; (2) spatial sensitiv-
ity needs modeling the local structures within an image.
Predicting dense representations from masked images thus
helps, because it models the conditional distribution of im-
age content within each image. These observations motivate
us to predict the dense representations of an image from amasked view with different augmentations.
To this end, we propose Siamese Image Modeling
(SiameseIM), which reconstructs the dense representations
of an augmented view, based on another masked view
from the same image but with different augmentations (see
Fig. 1). It adopts a Siamese network with an online and a
target branch. The online branch consists of an encoder that
maps the first masked view into latent representations, and
a decoder that reconstructs the representations of the sec-
ond view according to the relative positions between these
two views. The target branch only contains a momentum
encoder that encodes the second view into the prediction
target. The encoder is made up of a backbone and a projec-
tor. After the pre-training, we only use the online backbone
for downstream tasks.
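A minimal sketch of one training step with these two branches is given below. The momentum rate, the decoder signature, and the choice of a smooth-L1 loss on normalized dense features are assumptions made for illustration, not the exact SiameseIM recipe.

```python
# Hedged sketch of an online/target (momentum) training step with a dense prediction loss.
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(online_encoder, target_encoder, m=0.996):
    for p_o, p_t in zip(online_encoder.parameters(), target_encoder.parameters()):
        p_t.mul_(m).add_(p_o, alpha=1 - m)          # exponential moving average of online weights

def siamese_im_step(online_encoder, decoder, target_encoder, view1_masked, view2, rel_pos):
    latent = online_encoder(view1_masked)           # encode visible patches of the first (masked) view
    pred = decoder(latent, rel_pos)                 # predict dense tokens of the second view (assumed signature)
    with torch.no_grad():
        target = target_encoder(view2)              # momentum target branch, no gradient
    # Dense, per-token loss between prediction and target.
    loss = F.smooth_l1_loss(F.normalize(pred, dim=-1), F.normalize(target, dim=-1))
    loss.backward()
    ema_update(online_encoder, target_encoder)
    return loss
```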
As shown in Tab. 1, SiameseIM is able to surpass both
MIM and ID methods over a wide range of evaluation tasks,
including full-data fine-tuning, few-shot learning and linear
probing on ImageNet [16], object detection on COCO [36]
and LVIS [22], semantic segmentation on ADE20k [58],
as well as several robustness benchmarks [26–28, 47]. By
gathering semantic alignment and spatial sensitivity in one
model, SiameseIM can deliver superior results for all tasks.
We also note that such improvements are more obvious on
ADE20k (∼3 points) and LVIS (∼1.6 points for rare classes)
datasets. These long-tailed datasets demand semantic
alignment and spatial sensitivity at the same time, and
SiameseIM thus can deliver superior performance on them.
Our contributions can be summarized as follows:
• As a new form of SSL, SiameseIM is proposed to explore
the possibilities of self-supervised pre-training. It dis-
plays for the first time that using only a single dense
loss is enough to learn semantic alignment and spatial
sensitivity well at the same time;
• Compared with MIM methods, SiameseIM shows that re-
constructing another view helps to obtain good semantic
alignment. This also suggests that MIM framework can
be used to reconstruct other targets with proper guidance,
which opens a possible direction for MIM pretraining;
• Compared with ID methods, SiameseIM shows that
dense supervision can be applied by matching the dense
correspondence between two views strictly through their
relative positions. We demonstrate dense supervision can
bring a considerable improvement of spatial sensitivity;
• SiameseIM is able to surpass both MIM and ID meth-
ods over a wide range of tasks. SiameseIM obtains more
improvements in few-shot, long-tail and robustness-
concerned scenarios.
|
Su_Physics-Driven_Diffusion_Models_for_Impact_Sound_Synthesis_From_Videos_CVPR_2023 | Abstract
Modeling sounds emitted from physical object interac-
tions is critical for immersive perceptual experiences in real
and virtual worlds. Traditional methods of impact sound
synthesis use physics simulation to obtain a set of physics
parameters that could represent and synthesize the sound.
However, they require fine details of both the object geome-
tries and impact locations, which are rarely available in
the real world and can not be applied to synthesize impact
sounds from common videos. On the other hand, existing
video-driven deep learning-based approaches could only
capture the weak correspondence between visual content
and impact sounds since they lack of physics knowledge. In
this work, we propose a physics-driven diffusion model that
can synthesize high-fidelity impact sound for a silent video
clip. In addition to the video content, we propose to use ad-
ditional physics priors to guide the impact sound synthesis
procedure. The physics priors include both physics parame-
ters that are directly estimated from noisy real-world impact
sound examples without sophisticated setup and learned
residual parameters that interpret the sound environment
via neural networks. We further implement a novel diffusion
model with specific training and inference strategies to com-
bine physics priors and visual information for impact sound
synthesis. Experimental results show that our model outper-
forms several existing systems in generating realistic impact
sounds. Lastly, the physics-based representations are fully
interpretable and transparent, thus allowing us to perform
sound editing flexibly. We encourage readers to visit our
project page1 to watch demo videos with the audio turned
on to experience the results.
| 1. Introduction
Automatic sound effect production has become demand-
ing for virtual reality, video games, animation, and movies.
Traditional movie production heavily relies on talented Foley
artists to record many sound samples in advance and man-
*Work done while interning at MIT-IBM Watson AI Lab
1https://sukun1045.github.io/video-physics-sound-
diffusion/
Figure 1. The physics-driven diffusion model takes physics priors
and video input as conditions to synthesize high-fidelity impact
sound. Please also see the supplementary video and materials with
sample results.
ually perform laborious editing to fit the recorded sounds
to visual content. Though we could obtain a satisfactory
sound experience at the cinema, it is labor-intensive and
challenging to scale up the sound effects generation of vari-
ous complex physical interactions.
Recently, much progress has been made in automatic
sound synthesis, which can be divided into two main cate-
gories. The first category is physics-based modal synthesis
methods [37, 38, 50], which are often used for simulating
sounds triggered by various types of object interactions. Al-
though the synthesized sounds can reflect the differences
between various interactions and the geometry property of
the objects, such approaches require a sophisticated designed
environment to perform physics simulation and compute a
set of physics parameters for sound synthesis. It is, therefore,
impractical to scale up for a complicated scene because of
the time-consuming parameter selection procedure. On the
other hand, due to the availability of a significant amount
of impact sound videos in the wild, training deep learning
models for impact sound synthesis turns out to be a promis-
ing direction. Indeed, several works have shown promising
results in various audio-visual applications [63]. Unfortu-
nately, most existing video-driven neural sound synthesis
methods [7, 62] apply end-to-end black box model training
and lack physics knowledge, which plays a significant role
in modeling impact sound because a minor change in the
impact location could exert a significant difference in the
sound generation process. As a result, these methods are
prone to learning an average or smooth audio representa-
tion that contains artifacts, which usually leads to generating
unfaithful sound.
In this work, we aim to address the problem of auto-
matic impact sound synthesis from video input. The main
challenge for the learning-based approach is the weak cor-
respondence between visual and audio domains since the
impact sounds are sensitive to the underlying physics. With-
out further physics knowledge, generating high-fidelity im-
pact sounds from videos alone is insufficient. Motivated
by physics-based sound synthesis methods using a set of
physics mode parameters to represent and re-synthesize im-
pact sounds, we design a physics prior that could contain
sufficient physics information to serve as a conditional sig-
nal to guide the deep generative model synthesizes impact
sounds from videos. However, since we could not perform
physics simulation on raw video data to acquire precise
physics parameters, we explored estimating and predicting
physics priors from sounds in videos. We found that such
physics priors significantly improve the quality of synthe-
sized impact sounds. For deep generative models, recent
successes in image generation such as DALL-E 2 and Ima-
gen [44] show that Denoising Diffusion Probabilistic Models
(DDPM) outperform GANs in terms of fidelity and diversity,
and its training process is usually with less instability and
mode collapse issues. While the idea of the denoising pro-
cess is naturally fitted with sound signals, it is unclear how
video input and physics priors could jointly condition the
DDPM and synthesize impact sounds.
To address all these challenges, we propose a novel sys-
tem for impact sound synthesis from videos. The system in-
cludes two main stages. In the first stage, we encode physics
knowledge of the sound using physics priors, including es-
timated physical parameters using signal processing tech-
niques and learned residual parameters interpreting the sound
environment via neural networks. In the second stage, we
formulate and design a DDPM model conditioned on visual
input and physics priors to generate a spectrogram of impact
sounds. Since the physics priors are extracted from the audio
samples, they become unavailable at the inference stage. To
solve this problem, we propose a novel inference pipeline to
use test video features to query a physics latent feature from
the training set as guidance to synthesize impact sounds on
unseen videos. Since the video input is unseen, we can still
generate novel impact sounds from the diffusion model even
if we reuse the training set’s physics knowledge. In summary,
our main contributions to this work are:
•We propose novel physics priors to provide physics
knowledge to impact sound synthesis, including estimated
physics parameters from raw audio and learned residual
parameters approximating the sound environment.
•We design a physics-driven diffusion model with different
training and inference pipeline for impact sound synthesis
from videos. To the best of our knowledge, we are the first
work to synthesize impact sounds from videos using the diffusion model.
•Our approach outperforms existing methods on both quan-
titative and qualitative metrics for impact sound synthe-
sis. The transparent and interpretable properties of physics
priors unlock the possibility of interesting sound editing
applications such as controllable impact sound synthesis.
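Returning to the inference pipeline described above, a minimal sketch of retrieving a physics latent for an unseen clip is shown below; the pooled video feature, the cosine-similarity retrieval, and all names are assumptions made for illustration.

```python
# Hedged sketch: retrieve a physics prior from the training bank using video features.
import torch
import torch.nn.functional as F

def retrieve_physics_prior(test_video_feat, train_video_feats, train_physics_latents):
    """test_video_feat: (D,) pooled feature of the test clip.
    train_video_feats: (N, D) features of training clips; train_physics_latents: (N, P)."""
    sims = F.cosine_similarity(test_video_feat.unsqueeze(0), train_video_feats, dim=-1)  # (N,)
    idx = sims.argmax()
    return train_physics_latents[idx]   # used as the physics condition of the diffusion model

# usage (shapes only): prior = retrieve_physics_prior(feat, bank_feats, bank_latents)
```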
|
Tang_You_Need_Multiple_Exiting_Dynamic_Early_Exiting_for_Accelerating_Unified_CVPR_2023 | Abstract
Large-scale Transformer models bring significant im-
provements for various downstream vision language tasks
with a unified architecture. The performance improvements
come with increasing model size, resulting in slow inference
speed and increased serving cost. While certain
predictions benefit from the full computation of the large-
scale model, not all inputs need the same amount of com-
putation, potentially leading to computation re-
source waste. To handle this challenge, early exiting is pro-
posed to adaptively allocate computational power in term of
input complexity to improve inference efficiency. The exist-
ing early exiting strategies usually adopt output confidence
based on intermediate layers as a proxy of input complexity
to incur the decision of skipping following layers. How-
ever, such strategies cannot be applied to encoder in the
widely-used unified architecture with both encoder and de-
coder due to difficulty of output confidence estimation in the
encoder layers. It is suboptimal in term of saving compu-
tation power to ignore the early exiting in encoder compo-
nent. To address this issue, we propose a novel early exiting
strategy for unified vision language models, which allows
to dynamically skip the layers in encoder and decoder si-
multaneously in term of input layer-wise similarities with
multiple times of early exiting, namely MuE . By decompos-
ing the image and text modalities in the encoder, MuE is
flexible and can skip different layers in term of modalities,
advancing the inference efficiency while minimizing perfor-
mance drop. Experiments on the SNLI-VE and MS COCO
datasets show that the proposed approach MuE can reduce
expected inference time by up to 50% and 40% while main-
taining 99% and 96% performance respectively. | 1. Introduction
Recent advances in multi-modal Transformer-based
large-scale models [25, 28, 48, 57] bring improve-
ments across various vision language tasks. Among
the Transformer-based models, the unified sequence-to-
sequence architecture [38, 46] has attracted much attention
due to its potential to become a universal computation
engine to diverse tasks. Although large-scale models
have achieved unattainable performance, their expensive
computational cost usually hinders their applications in
real-time scenarios.
While the scaling effect suggests that the performance
of the model benefits from its increased size, not every in-
put requires the same amount of computation to achieve
similar performance. Such an observation is particularly
valid in visual language tasks, where inputs from different
modalities may require different amounts of computation.
Early exiting is a popular method to reduce computational
costs by adaptively skipping layers on top of the input while
preserving the general knowledge of the large-scale model.
Existing studies aim to deal with early exiting decisions
for encoder-only models [51, 52] or decoder components in
encoder-decoder architectures [8], but cannot induce early
exiting decisions for both components at the same time.
Considering that single-component strategies may be sub-
optimal in terms of saving computation cost, in this paper,
we investigate how to perform early exiting for both en-
coder and decoder components in a sequence-to-sequence
architecture to elucidate a new way to further improve in-
ference efficiency.
Given the varied complexity of the inputs, it is natural
to consider skipping some layers of the encoder as well
as the decoder. Current decision mechanisms use classi-
fiers to predict the output confidence of intermediate repre-
sentations and stop computation if the confidence reaches
Figure 1. The performance of different early exiting methods on
SNLI-VE [49] and MS COCO [1] with certain expected inference
time reduction rates.
predefined threshold. However, extending this to unified
sequence-to-sequence model is non-trivial due to two main
challenges: (1) there are dependencies between the exiting
decisions made in the encoder and decoder, and (2) it is
difficult to apply confidence classifiers to skip the encoder
layer before going through the decoder layer for task output.
To address the above challenges and to enable early exit-
ing of encoder and decoder in sequence-to-sequence frame-
work, we propose a novel early exiting strategy based on
layer-wise input similarity, which is different from existing
works based on task confidence [42]. More specifically, the
model is encouraged to skip following layers in both en-
coder and decoder when the layer-wise similarity reaches a
certain threshold.
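A minimal sketch of this similarity-based rule for a single encoder (or decoder) stack is shown below; the threshold value and the token-averaged cosine similarity are illustrative assumptions.

```python
# Hedged sketch of layer-wise, similarity-based early exiting in a Transformer stack.
import torch
import torch.nn.functional as F

def encode_with_early_exit(layers, hidden, threshold=0.99):
    """layers: list of Transformer blocks; hidden: (B, L, D) input tokens."""
    for layer in layers:
        new_hidden = layer(hidden)
        # Average token-wise cosine similarity between consecutive layer outputs.
        sim = F.cosine_similarity(new_hidden, hidden, dim=-1).mean()
        hidden = new_hidden
        if sim > threshold:          # representations have saturated: skip the remaining layers
            break
    return hidden
```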
This method is inspired by the saturation observa-
tion [11] which shows that the hidden-state of each Trans-
former layer arrives at a saturation status as going into deep
layers. For the vision-language tasks, we find that the sim-ilar observation regarding saturation is also valid as shown
in Figure 3. This observation lands the foundation that we
could make the exiting decision based on the intermediate
layer-wise input similarities without going through the fol-
lowing layers. Besides, since the computation needed for
input in different modalities usually varies, we propose a
modality decomposition mechanism, which could further
enable early fusion large-scale multi-modal models to break
the tie between modalities and enjoy flexible exiting deci-
sion over modalities. To encourage the early exiting behav-
ior with a minimal performance loss, we design a layer-wise
task loss, which enforces each layer to output informative
features for final task. Figure 1 shows the results on SNLI-
VE dataset [49] and MS COCO [1] in term of expected time
reduction rate and task performance. We compare MuE
with several State-of-the-art early existing methods and ob-
serve that MuE is able to largely reduce inference time with
a minimal performance drop compared to other SoTA meth-
ods. Our main contributions are summarized as follows:
• To the best of our knowledge, this is a pioneering work
to extend early exiting choices to both encoder and de-
coder of sequence-to-sequence architecture. To this
end, we propose a novel early exiting strategy based
on layer-wise input similarity with the valid assump-
tion on saturation states in vision language models.
• Given the different characteristics of the modalities
and tasks, we decompose the modalities encoder for
early-fusion pre-trained models, bringing additional
improvements in terms of inference efficiency.
• We introduce layer-wise task loss, linking each layer
in the decoder to the final task, effectively helping to
maintain task performance when a significant time re-
duction is required.
• Extensive experiments show that our method can
largely reduce the inference time of vision language
models by up to 50% with minimal performance drop.
|
Tang_Contrastive_Grouping_With_Transformer_for_Referring_Image_Segmentation_CVPR_2023 | Abstract
Referring image segmentation aims to segment the tar-
get referent in an image conditioning on a natural language
expression. Existing one-stage methods employ per-pixel
classification frameworks, which attempt straightforwardly
to align vision and language at the pixel level, thus failing
to capture critical object-level information. In this paper,
we propose a mask classification framework, Contrastive
Grouping with Transformer network (CGFormer), which
explicitly captures object-level information via token-based
querying and grouping strategy. Specifically, CGFormer
first introduces learnable query tokens to represent objects
and then alternately queries linguistic features and groups
visual features into the query tokens for object-aware cross-
modal reasoning. In addition, CGFormer achieves cross-
level interaction by jointly updating the query tokens and
decoding masks in every two consecutive layers. Finally,
CGFormer cooperates contrastive learning to the grouping
strategy to identify the token and its mask corresponding
to the referent. Experimental results demonstrate that CG-
Former outperforms state-of-the-art methods in both seg-
mentation and generalization settings consistently and sig-
nificantly. Code is available at https://github.com/
Toneyaya/CGFormer .
| 1. Introduction
Referring Image Segmentation (RIS) aims to segment
the target referent in an image given a natural language ex-
pression [12, 17, 64]. It attracts increasing attention in the
research community and is expected to show its potential
in real applications, such as human-robot interaction via
natural language [50] and image editing [3]. Compared to
classical image segmentation that classifies pixels or masks
into a closed set of fixed categories, RIS requires locat-
ing referent at pixel level according to the free-form nat-
ural languages with open-world vocabularies. It faces chal-
†Sibei Yang is the corresponding author.
Figure 1. Comparison of Transformer-based RIS methods and our
CGFormer. (a) CRIS [51] and ReSTR [25] fuse relevant linguistic
features into visual features, while (b) VLT [12] generates query
vectors to query visual features for segmentation. In contrast, (c)
our CGFormer introduces learnable query tokens and explicitly
groups visual features into tokens conditioning on language. Co-
operating the grouping strategy with contrastive learning, it iden-
tifies the token and its mask corresponding to the referent.
lenges of comprehensively understanding vision and lan-
guage modalities and aligning them at pixel level.
Existing works mainly follow the segmentation frame-
work of per-pixel classification [17, 34] integrated with
multi-modal fusion to address the challenges. They intro-
duce various fusion methods [19,20,26,32,46,63] to obtain
vision-language feature maps and predict the segmentation
results based on the feature maps. Recently, the improve-
ment of vision-language fusion for RIS mainly lies in utiliz-
ing the Transformer [12, 25, 51, 62]. LA VT [62] integrates
fusion module into the Transformer-based visual encoder.
CRIS [51] and ReSTR [25] fuse linguistic features into each
feature of the visual feature maps, as shown in Figure 1a.
In contrast, VLT [12] integrates the relevant visual features
into language-conditional query vectors via transformer de-
coder, as shown in Figure 1b.
Although these methods have improved the segmenta-
tion accuracy, they still face several intrinsic limitations.
First, the works [25,51,62] based on pixel-level fusion only
model the pixel-level dependencies for each visual feature,
which fails to capture the crucial object/region-level infor-
mation. Therefore, they cannot accurately ground expres-
sions that require efficient cross-modal reasoning on ob-
jects. Second, although VLT [12]’s query vectors contain
object-level information after querying, it directly weights
and reshapes different tokens into one multi-modal feature
map for decoding the final segmentation mask. Therefore,
it loses the image’s crucial spatial priors (relative spatial ar-
rangement among pixels) in the reshaping process. More
importantly, it does not model the inherent differences be-
tween query vectors, resulting in that even though differ-
ent query vectors comprehend expressions in their own way,
they still focus on similar regions, but fail to focus on dif-
ferent regions and model their relations.
In this paper, we aim to propose a simple and effective
framework to address these limitations. Instead of using
per-pixel classification framework, we adopt an end-to-end
mask classification framework [1, 16] (see Figure 1c) to
explicitly capture object-level information and decode seg-
mentation masks for both the referent and other disturbing
objects/stuffs. Therefore, we can simplify RIS task by find-
ing the corresponding mask for the expression. Note that
our framework differs from two-stage RIS methods [53,64]
which require explicitly detecting the objects first and then
predicting the mask in the detected bounding boxes.
Specifically, we propose a Contrastive Grouping with
Transformer (CGFormer) network consisting of the Group
Transformer and Consecutive Decoder modules. The
Group Transformer aims to capture object-level informa-
tion and achieve object-aware cross-modal reasoning. The
success of applying query tokens in object detection [1, 69]
and instance segmentation [8, 16, 54] could be a potential
solution. However, it is non-trivial to apply them to RIS.
Without the annotation supervision of other mentioned ob-
jects other than the referent, it is hard to make tokens pay
attention to different objects and distinguish the token cor-
responding to the referent from other tokens. Therefore,
although we also specify query tokens as object-level in-
formation representations, we explicitly group the visual
feature map’s visual features into query tokens to ensure
that different tokens focus on different visual regions with-
out overlaps. Besides, we can further cooperate contrastive
learning with the grouping strategy to make the referent to-
ken attend to the referent-relevant information while forcing
other tokens to focus on different objects and background
regions, as shown in Figure 1c. In addition, we alternately
query the linguistic features and group the visual features
into the query tokens for cross-modal reasoning.
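As an illustration of one grouping step, the sketch below softly assigns every visual feature to a query token by taking the softmax over the token axis, so that different tokens pool largely non-overlapping regions before being updated. The layer layout and normalization are assumptions for the sketch, not CGFormer's exact operator.

```python
# Hedged sketch of grouping visual features into query tokens.
import torch
import torch.nn as nn

class GroupingBlock(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)

    def forward(self, tokens, visual):          # tokens: (B, T, D), visual: (B, HW, D)
        q, k = self.to_q(tokens), self.to_k(visual)
        logits = torch.einsum("btd,bnd->btn", q, k) / q.shape[-1] ** 0.5
        # Softmax over the *token* axis: each visual feature distributes its mass across groups,
        # so tokens compete for pixels instead of all attending to the same region.
        assign = logits.softmax(dim=1)                                   # (B, T, HW)
        grouped = torch.einsum("btn,bnd->btd", assign, visual)
        grouped = grouped / (assign.sum(-1, keepdim=True) + 1e-6)        # average pooled features per group
        return tokens + grouped                                          # updated query tokens
```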
Furthermore, integrating and utilizing multi-level fea-
ture maps are crucial for accurate segmentation. Previousworks [32,40,63] fuse visual and linguistic features at mul-
tiple levels in parallel and late integrate them via ConvL-
STM [47] or FPNs [30]. However, their fusion modules are
solely responsible for cross-modal alignment at each level,
which fails to perform joint reasoning for multiple levels.
Therefore, we propose a Consecutive Decoder that jointly
updates query tokens and decodes masks in every two con-
secutive layers to achieve cross-level reasoning.
To evaluate the effectiveness of CGFormer, we conduct
experiments on three standard benchmarks, i.e., RefCOCO
series datasets [39,41,65]. In addition, unlike semantic seg-
mentation, RIS is not limited by the close-set classification
but to open-vocabulary alignment. It is necessary to evalu-
ate the generalization ability of RIS models. Therefore, we
introduce new subsets of training sets on the three datasets
to ensure the categories of referents in the test set are not
seen in the training stage, inspired by the zero-shot visual
grounding [44] and open-set object detection [67].
In summary, our main contributions are as follows,
• We propose a Group Transformer cooperated with con-
trastive learning to achieve object-aware cross-modal
reasoning by explicitly grouping visual features into
different regions and modeling their dependencies con-
ditioning on linguistic features.
• We propose a Consecutive Decoder to achieve cross-
level reasoning and segmentation by jointly perform-
ing the cross-modal inference and mask decoding in
every two consecutive layers in the decoder.
• We are the first to introduce an end-to-end mask clas-
sification framework, the Contrastive Grouping with
Transformer (CGFormer), for referring image segmen-
tation. Experimental results demonstrate that our CG-
Former outperforms all state-of-the-art methods on all
three benchmarks consistently.
• We introduce new splits on datasets for evaluating
generalization for referring image segmentation mod-
els. CGFormer shows stronger generalizability com-
pared to state-of-the-art methods thanks to object-
aware cross-modal reasoning via contrastive learning.
|
Tosi_NeRF-Supervised_Deep_Stereo_CVPR_2023 | Abstract
We introduce a novel framework for training deep stereo
networks effortlessly and without any ground-truth. By
leveraging state-of-the-art neural rendering solutions, we
generate stereo training data from image sequences col-
lected with a single handheld camera. On top of them, a
NeRF-supervised training procedure is carried out, from
which we exploit rendered stereo triplets to compensate for
occlusions and depth maps as proxy labels. This results in
stereo networks capable of predicting sharp and detailed
disparity maps. Experimental results show that models
trained under this regime yield a 30-40% improvement over
existing self-supervised methods on the challenging Middle-
bury dataset, filling the gap to supervised models and, most
times, outperforming them at zero-shot generalization.
| 1. Introduction
Depth from stereo is one of the longest-standing research
fields in computer vision [ 38]. It involves finding pixels
correspondences across two rectified images to obtain the
disparity – i.e., their difference in terms of horizontal coor-
dinates – and then use it to triangulate depth. After years
of studies with hand-crafted algorithms [ 56], deep learning
radically changed the way of approaching the problem [ 74].
End-to-end deep networks [ 50] rapidly became the domi-
nant solution for stereo, delivering outstanding results on
benchmarks [42,55,58] given sufficient training data. This latter requirement is the key factor for their suc-
cess, but it is also one of the greatest limitations. Annotated
data is hard to source when dealing with depth estimation
since additional sensors are required (e.g., LiDARs), and
thus represents a thick entry barrier to the field. Over the
years, two main trends have allowed to soften this problem:
self-supervised learning paradigms [ 3,23,76] and the use
of synthetic data [ 30,41,75]. Despite these advances, both
approaches still have weaknesses to address.
Self-supervised learning: despite the possibility to train
on any unlabeled stereo pair collected by any user – and
potentially opening to data democratization – the use of
self-supervised losses is ineffective at dealing with ill-posed
stereo settings (e.g. occlusions, non-Lambertian surfaces,
etc.). Albeit recent approaches soften the occlusions prob-
lem [ 3], predictions are far from being as sharp, detailed
and accurate as those obtained through supervised training.
Moreover, the self-supervised stereo literature [ 13,29,76]
often focuses on well-defined domains (i.e., KITTI) and
rarely exposes domain generalization capabilities [ 3].
Synthetic data: although training on densely annotated
synthetic images can guide the networks towards sharp, de-
tailed and accurate predictions, the domain-shift that occurs
when testing on real data dampens the full potential of the
trained model. A large body of recent literature addressing
zero-shot generalization [ 2,16,30,35,36,87] proves how
relevant the problem is. However, obtaining stereo pairs as
realistic as possible requires significant effort, despite syn-
thetic depth labels being easily sourced through a graphics
rendering pipeline. Indeed, modelling high-quality assets
is crucial for mitigating the domain shift and requires ex-
cellent graphics skills. While artists make various assets
available, these are seldom open source and necessitate ad-
ditional human labor to be organized into plausible scenes.
In short, in a world where data is the new gold, obtain-
ing flexible and scalable training samples to unleash the full
potential of deep stereo networks still remains an open prob-
lem. In this paper, we propose a novel paradigm to ad-
dress this challenge. Given the recent advances in neural
rendering [ 44,45], we exploit them as data factories : we
collect sparse sets of images in-the-wild with a standard,
single handheld camera. After that, we train a Neural Ra-
diance Field (NeRF) model for each sequence and use it to
render arbitrary, novel views of the same scene. Specifi-
cally, we synthesize stereo pairs from arbitrary viewpoints
by rendering a reference view corresponding to the real ac-
quired image, and a target one on the right of it, displaced
by means of a virtual arbitrary baseline. This allows us to
generate countless samples to train any stereo network in
a self-supervised manner by leveraging popular photomet-
ric losses [ 23]. However, this na ¨ıve approach would inherit
the limitations of self-supervised methods [ 13,29,76] at oc-
clusions, which can be effectively addressed by rendering
a third view for each pair, placed on the left of the source view specularly to the other target image. This allows us to
compensate for the missing supervision at occluded regions.
Moreover, proxy-supervision in the form of rendered depth
by NeRF completes our NeRF-Supervised training regime.
With it, we can train deep stereo networks by conducting a
low-effort collection campaign, and yet obtain state-of-the-
art results without requiring any ground-truth label – or not
even a real stereo camera! – as shown on top of Fig. 1.
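The training signal described above can be sketched as follows (an illustrative simplification, not the authors' implementation); the warping function, the loss weight, and the tensor shapes are assumptions.

```python
# Sketch of a NeRF-supervised objective: a photometric term over the rendered
# triplet, where the left view compensates occlusions of the right one, plus a
# proxy term against the NeRF-rendered disparity.
import torch
import torch.nn.functional as F

def photometric(pred_disp, src, tgt):
    """Warp tgt toward src using the predicted disparity and compare to src."""
    b, _, h, w = src.shape
    xs = torch.linspace(-1, 1, w).view(1, 1, w).expand(b, h, w)
    ys = torch.linspace(-1, 1, h).view(1, h, 1).expand(b, h, w)
    grid = torch.stack((xs - 2 * pred_disp.squeeze(1) / w, ys), dim=-1)
    warped = F.grid_sample(tgt, grid, align_corners=True)
    return (warped - src).abs().mean(dim=1, keepdim=True)      # per-pixel L1

def nerf_supervised_loss(pred_disp, left, center, right, nerf_disp, w_proxy=0.1):
    # take, per pixel, the better of the two photometric errors (occlusion handling)
    err = torch.min(photometric(pred_disp, center, right),
                    photometric(-pred_disp, center, left))
    proxy = (pred_disp - nerf_disp).abs().mean()                # rendered-depth proxy label
    return err.mean() + w_proxy * proxy

imgs = [torch.rand(1, 3, 64, 96) for _ in range(3)]
loss = nerf_supervised_loss(torch.rand(1, 1, 64, 96) * 10, *imgs, torch.rand(1, 1, 64, 96) * 10)
```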
We believe that our approach is a significant step towards
democratizing training data. In fact, we will demonstrate
how the efforts of just the four authors were enough to col-
lect sufficient data (roughly 270 scenes) to allow our NeRF-
Supervised stereo networks to outperform models trained
on synthetic datasets, such as [ 2,16,30,35,36,87], as well
as existing self-supervised methods [ 3,79] in terms of zero-
shot generalization, as depicted at the bottom of Fig. 1.
We summarize our main contributions as:
• A novel paradigm for collecting and generating stereo
training data using neural rendering and a collection of
user-collected image sequences.
• A NeRF-Supervised training protocol that combines
rendered image triplets and depth maps to address oc-
clusions and enhance fine details.
• State-of-the-art zero-shot generalization results on
challenging stereo datasets [ 55], without exploiting
any ground-truth or real stereo pair. |
Tascon-Morales_Logical_Implications_for_Visual_Question_Answering_Consistency_CVPR_2023 | Abstract
Despite considerable recent progress in Visual Question
Answering (VQA) models, inconsistent or contradictory an-
swers continue to cast doubt on their true reasoning ca-
pabilities. However, most proposed methods use indirect
strategies or strong assumptions on pairs of questions and
answers to enforce model consistency. Instead, we propose
a novel strategy intended to improve model performance by
directly reducing logical inconsistencies. To do this, we in-
troduce a new consistency loss term that can be used by a
wide range of the VQA models and which relies on know-
ing the logical relation between pairs of questions and an-
swers. While such information is typically not available in
VQA datasets, we propose to infer these logical relations
using a dedicated language model and use these in our pro-
posed consistency loss function. We conduct extensive ex-
periments on the VQA Introspect and DME datasets and
show that our method brings improvements to state-of-the-
art VQA models while being robust across different archi-
tectures and settings.
| 1. Introduction
Visual Questioning Answering (VQA) models have
drawn recent interest in the computer vision community as
they allow text queries to question image content. This
has given way to a number of novel applications in the
space of model reasoning [8, 29, 54, 56], medical diagno-
sis [21,37,51,60] and counterfactual learning [1,2,11]. With
the ability to combine language and image information in a
common model, it is unsurprising to see a growing use of
VQA methods.
Despite this recent progress, however, a number of im-
portant challenges remain when making VQAs more pro-
ficient. For one, it remains extremely challenging to build
VQA datasets that are void of bias. Yet this is critical to
ensure subsequent models are not learning spurious cor-
relations or shortcuts [49]. This is particularly daunting
in applications where domain knowledge plays an impor-
tant role ( e.g., medicine [15, 27, 33]). Alternatively, ensur-
Figure 1. Top: Conventional VQA models tend to produce inconsistent answers as a consequence of not considering the relations between question and answer pairs. Bottom: Our method learns the logical relation between question and answer pairs to improve consistency.
ing that responses of a VQA are coherent, or consistent , is
paramount as well. That is, VQA models that answer differ-
ently about similar content in a given image imply inconsis-
tencies in how the model interprets the inputs. A number of
recent methods have attempted to address this using logic-
based approaches [19], rephrasing [44], question gener-
ation [18, 40, 41] and regularizing using consistency con-
straints [47]. In this work, we follow this line of research
and look to yield more reliable VQA models.
We wish to ensure that VQA models are consistent in an-
swering questions about images. This implies that if multi-
ple questions are asked about the same image, the model’s
answers should not contradict themselves. For instance, if
one question about the image in Fig. 1 asks “Is there snow
on the ground?”, then the answer inferred should be consis-
tent with that of the question “Is it the middle of summer?”
As noted in [43], such question pairs involve reasoning and
perception, and consequentially lead the authors to define
inconsistency when the reasoning and perception questions
are answered correctly and incorrectly, respectively. Along
this line, [47] uses a similar definition of inconsistency to
regularize a VQA model meant to answer medical diagno-
sis questions that are hierarchical in nature. What is crit-
ical in both cases, however, is that the consistency of the
VQA model depends explicitly on its answers, as well as
the question and true answer. This hinges on the assump-
tion that perception questions are sufficient to answer rea-
soning questions. Yet, for any question pair, this may not be
the case. As such, the current definition of consistency (or
inconsistency) has been highly limited and does not truly
reflect how VQAs should behave.
To address the need to have self-consistent VQA mod-
els, we propose a novel training strategy that relies on logi-
cal relations. To do so, we re-frame question-answer (QA)
pairs as propositions and consider the relational construct
between pairs of propositions. This construct allows us to
properly categorize pairs of propositions in terms of their
logical relations. From this, we introduce a novel loss
function that explicitly leverages the logical relations be-
tween pairs of questions and answers in order to enforce
that VQA models be self-consistent. However, datasets typ-
ically do not contain relational information about QA pairs,
and collecting this would be extremely laborious and dif-
ficult. To overcome this, we propose to train a dedicated
language model that infers logical relations between propo-
sitions. Our experiments show that we can effectively in-
fer logical relations from propositions and use them in our
loss function to train VQA models that improve state-of-
the-art methods via consistency. We demonstrate this over
two different VQA datasets, against different consistency
methods, and with different VQA model architectures. Our
code and data are available at https://github.com/
sergiotasconmorales/imp_vqa .
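As an illustration of how such a consistency term could look, the sketch below penalises violations of entailment between answer probabilities; the exact formulation in the paper differs, and the variable names and the hinge form are our assumptions.

```python
# Hedged sketch of a logical-implication consistency penalty: if proposition A
# entails proposition B, penalise confidence in A that exceeds confidence in B.
import torch

def implication_loss(p_a, p_b, relation):
    """p_a, p_b: predicted probabilities that QA pairs A and B hold.
    relation: 1 where A -> B is known to hold, 0 otherwise."""
    violation = torch.relu(p_a - p_b)           # confidence in A exceeding confidence in B
    return (relation * violation).mean()

p_a = torch.tensor([0.9, 0.2, 0.8])
p_b = torch.tensor([0.3, 0.9, 0.9])
rel = torch.tensor([1.0, 1.0, 0.0])             # only the first two pairs are implications
loss = implication_loss(p_a, p_b, rel)          # penalises the first pair only
```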
|
Sun_Hierarchical_Semantic_Contrast_for_Scene-Aware_Video_Anomaly_Detection_CVPR_2023 | Abstract
Increasing scene-awareness is a key challenge in video
anomaly detection (VAD). In this work, we propose a hier-
archical semantic contrast (HSC) method to learn a scene-
aware VAD model from normal videos. We first incorporate
foreground object and background scene features with high-
level semantics by taking advantage of pre-trained video
parsing models. Then, building upon the autoencoder-
based reconstruction framework, we introduce both scene-
level and object-level contrastive learning to enforce the en-
coded latent features to be compact within the same seman-
tic classes while being separable across different classes.
This hierarchical semantic contrast strategy helps to deal
with the diversity of normal patterns and also increases
their discrimination ability. Moreover, for the sake of tack-
ling rare normal activities, we design a skeleton-based mo-
tion augmentation to increase samples and refine the model
further. Extensive experiments on three public datasets and
scene-dependent mixture datasets validate the effectiveness
of our proposed method.
| 1. Introduction
With the prevalence of surveillance cameras deployed
in public places, video anomaly detection (V AD) has at-
tracted considerable attention from both academia and in-
dustry. It aims to automatically detect abnormal events
so that the workload of human monitors can be greatly
reduced. By now, numerous V AD methods have been
developed under different supervision settings, including
weakly supervised [13, 50, 55, 58, 64, 76], purely unsu-
pervised [69, 72], and ones learning from normal videos
only [20, 24, 33, 44, 45]. However, it is extremely diffi-
cult or even impossible to collect sufficient and comprehen-
sive abnormal data due to the rare occurrence of anomalies,
whereas collecting abundant normal data is relatively easy.
Therefore, the setting of learning from normal data is more
*Corresponding author.
Figure 1. An illustration of hierarchical semantic contrast. The encoded scene-appearance/motion features are gathered together with respect to their semantic classes. Best viewed in color.
practical and plays the dominant role in past studies.
Although a majority of previous techniques learn their
V AD models from normal data, this task has still not
been well addressed due to the following reasons. First,
some anomalies are scene-dependent [46, 51], implying
that an appearance or motion may be anomalous in one
scene but normal in other scenes. How to detect scene-
dependent anomalies while preventing background bias ( i.e.
learning the background noise rather than the essence of
anomaly [31]) is a challenging problem. Second, normal
patterns are diverse. How to enable a deep V AD model
to represent the diverse normality well but not generalize
to anomalous data is also a challenge [18, 44]. Last but
not least, samples collected from different normal patterns
are imbalanced because some normal activities may appear
very sparsely [46]. How to deal with rare but normal activ-
ities is challenging as well.
Previous V AD methods mainly perform learning at
frame-level [20, 47, 75] or in an object-centric [17, 24, 78]
way. The former is prone to suffer from the background
bias [31] while most of the latter methods are background-
agnostic. There are some attempts to address the above-
mentioned challenges in one or another aspect. For in-
stance, a spatio-temporal context graph [51] and a hierar-
chical scene normality-binding model [1] are constructed to
discover scene-dependent anomalies. Memory-augmented
autoencoders (AE) [18,44] are designed to represent diverse
normal patterns while lessening the powerful capacity of
AEs. An over-sampling strategy [32] is adopted, but only to address
the imbalance between normal and abnormal data. Con-
trastively, in this work we address all of these challenges
simultaneously and in distinct ways.
The primary objective of our work is to handle scene-
dependent anomalies. An intuition behind scene-dependent
anomalies is that, if a type of object or activity is never
observed in one scene in normal videos, then it should be
viewed as an anomaly. It implies that we can first determine
the scene type and then check if an object or activity has oc-
curred in normal patterns of this scene. Based on this obser-
vation, we propose a hierarchical semantic contrast method
to learn a scene-aware V AD model. Taking advantage of
pre-trained video parsing networks, we group the appear-
ance and activity of objects and background scenes into se-
mantic categories. Then, building upon the autoencoder-
based reconstruction framework, we design both scene-
level and object-level contrastive learning to enforce the en-
coded latent features to gather together with respect to their
semantic categories, as shown in Fig. 1. When a test video is
input, we retrieve weighted normal features for reconstruc-
tion, and the clips with high reconstruction errors are detected as anomalies.
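A minimal sketch of a semantics-conditioned contrastive term in this spirit is shown below (not the paper's implementation); the temperature, feature dimensionality, and label source are assumed for the example.

```python
# Sketch of a supervised-contrastive term that pulls encoded features of the
# same semantic class together and pushes different classes apart; it can be
# applied at either scene level or object level.
import torch
import torch.nn.functional as F

def semantic_contrast(z, labels, tau=0.1):
    z = F.normalize(z, dim=-1)                              # (N, C) encoded latents
    sim = z @ z.t() / tau
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    mask.fill_diagonal_(0)                                  # exclude self-pairs
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(torch.eye(len(z)).bool(), -1e9), dim=1, keepdim=True)
    pos_per_row = mask.sum(1).clamp(min=1)
    return -((mask * log_prob).sum(1) / pos_per_row).mean()

z = torch.randn(16, 128)                                    # e.g. object-level latent features
labels = torch.randint(0, 4, (16,))                         # semantic classes from video parsing
loss = semantic_contrast(z, labels)
```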
The contributions of this work are as follows:
• We build a scene-aware reconstruction framework
composed of scene-aware feature encoders and object-
centric feature decoders for anomaly detection. The
scene-aware encoders take background scenes into ac-
count while the object-centric decoders are to reduce
the background noise.
• We propose hierarchical semantic contrastive learn-
ing to regularize the encoded features in the latent
spaces, making normal features more compact within
the same semantic classes and separable between dif-
ferent classes. Consequently, it helps to discriminate
anomalies from normal patterns.
• We design a skeleton-based augmentation method to
generate both normal and abnormal samples based on
our scene-aware V AD framework. The augmented
samples enable us to additionally train a binary clas-
sifier that helps to boost the performance further.
• Experiments on three public datasets demonstrate
promising results on scene-independent V AD. More-
over, our method also shows a strong ability in detect-
ing scene-dependent anomalies on self-built datasets.
|
Song_Multi-Mode_Online_Knowledge_Distillation_for_Self-Supervised_Visual_Representation_Learning_CVPR_2023 | Abstract
Self-supervised learning (SSL) has made remarkable
progress in visual representation learning. Some studies
combine SSL with knowledge distillation (SSL-KD) to boost
the representation learning performance of small models. In
this study, we propose a Multi-mode Online Knowledge Dis-
tillation method (MOKD) to boost self-supervised visual rep-
resentation learning. Different from existing SSL-KD meth-
ods that transfer knowledge from a static pre-trained teacher
to a student, in MOKD, two different models learn collab-
oratively in a self-supervised manner. Specifically, MOKD
consists of two distillation modes: self-distillation and cross-
distillation modes. Among them, self-distillation performs
self-supervised learning for each model independently, while
cross-distillation realizes knowledge interaction between dif-
ferent models. In cross-distillation, a cross-attention feature
search strategy is proposed to enhance the semantic feature
alignment between different models. As a result, the two
models can absorb knowledge from each other to boost their
representation learning performance. Extensive experimen-
tal results on different backbones and datasets demonstrate
that two heterogeneous models can benefit from MOKD and
outperform their independently trained baseline. In addition,
MOKD also outperforms existing SSL-KD methods for both
the student and teacher models.
| 1. Introduction
Due to the promising performance of unsupervised visual
representation learning in many computer vision tasks, self-
supervised learning (SSL) has attracted widespread attention
from the computer vision community. SSL aims to learn
general representations that can be transferred to downstream
tasks by utilizing massive unlabeled data.
Among various SSL methods, contrastive learning [8, 21]
has shown significant progress in closing the performance
*Corresponding author.
Figure 1. Overview of the proposed Multi-mode Online Knowledge Distillation (MOKD). In MOKD, two different models are trained collaboratively through two types of knowledge distillation modes, i.e., a self-distillation mode and a cross-distillation mode. EMA denotes exponential-moving-average.
gap with supervised methods in recent years. It aims at max-
imizing the similarity between views from the same instance
(positive pairs) while minimizing the similarity among views
from different instances (negative pairs). MoCo [10, 21] and
SimCLR [8, 9] use both positive and negative pairs for con-
trast. They significantly improve the performance compared
to previous methods [45, 51]. After that, many methods are
proposed to solve the limitations in contrastive learning, such
as the false negative problem [14, 20, 29, 32, 43], the limi-
tation of large batch size [31, 54], and the problem of hard
augmented samples [27, 49]. At the same time, other stud-
ies [2,4,5,11,17,18,55] abandon the negative samples during
contrastive learning. With relatively large models, such as
ResNet50 [23] or larger, these methods achieve comparable
performance on different tasks than their supervised coun-
terparts. However, as revealed in previous studies [15, 16],
they do not perform well on small models [26, 42] and have
a large gap from their supervised counterparts.
To address this challenge in contrastive learning, some
studies [1, 9, 15, 16, 37, 44, 58] propose to combine knowl-
edge distillation [24] with contrastive learning (SSL-KD) to
improve the performance of small models. These methods
first train a larger model in a self-supervised manner and
then distill the knowledge of the trained teacher model to a
smaller student model. There is a limitation in these SSL-
KD methods, i.e., knowledge is distilled to the student model
from the static teacher model in a unidirectional way. The
teacher model cannot absorb knowledge from the student
model to boost its performance.
In this study, we propose a Multi-mode Online Knowl-
edge Distillation method (MOKD), as illustrated in Fig. 1, to
boost the representation learning performance of two models
simultaneously. Different from existing SSL-KD methods
that transfer knowledge from a static pre-trained teacher to
a student, in MOKD, two different models learn collabora-
tively in a self-supervised manner. Specifically, MOKD con-
sists of a self-distillation mode and a cross-distillation mode.
Among them, self-distillation performs self-supervised learn-
ing for each model independently, while cross-distillation
realizes knowledge interaction between different models.
In addition, a cross-attention feature search strategy is pro-
posed in cross-distillation to enhance the semantic feature
alignment between different models. Extensive experimental
results on different backbones and datasets demonstrate that
model pairs can both benefit from MOKD and outperform
their independently trained baseline. For example, when
trained with ResNet [23] and ViT [13], two models can ab-
sorb knowledge from each other, and representations of the
two models show the characteristics of each other. In addi-
tion, MOKD also outperforms existing SSL-KD methods for
both the student and teacher models.
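The two distillation modes can be sketched as follows (an illustrative simplification that omits the cross-attention feature search); all shapes, the temperature, and the soft cross-entropy form are assumptions.

```python
# Sketch of MOKD-style training: each model follows its own EMA teacher
# (self-distillation) and the other model's EMA teacher (cross-distillation).
import torch
import torch.nn.functional as F

def kd(student, teacher, tau=0.2):
    """Soft cross-entropy between L2-normalised embeddings, teacher detached."""
    p_t = F.softmax(F.normalize(teacher.detach(), dim=-1) / tau, dim=-1)
    log_p_s = F.log_softmax(F.normalize(student, dim=-1) / tau, dim=-1)
    return -(p_t * log_p_s).sum(dim=-1).mean()

def mokd_loss(s1, s2, t1, t2):
    self_dist = kd(s1, t1) + kd(s2, t2)         # each student follows its own EMA teacher
    cross_dist = kd(s1, t2) + kd(s2, t1)        # knowledge interaction across architectures
    return self_dist + cross_dist

s1, s2 = torch.randn(8, 256), torch.randn(8, 256)   # e.g. ResNet and ViT projections
t1, t2 = torch.randn(8, 256), torch.randn(8, 256)   # their EMA teachers
loss = mokd_loss(s1, s2, t1, t2)
```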
The contributions of this study are threefold:
•We propose a novel self-supervised online knowledge
distillation method, i.e., MOKD.
•MOKD can boost the performance of two models simul-
taneously, achieving state-of-the-art contrastive learn-
ing performance on different models.
•MOKD achieves state-of-the-art SSL-KD performance.
|
Tseng_Consistent_View_Synthesis_With_Pose-Guided_Diffusion_Models_CVPR_2023 | Abstract
Novel view synthesis from a single image has been a
cornerstone problem for many Virtual Reality applications that provide immersive experiences. However, most existing techniques can only synthesize novel views within a limited range of camera motion or fail to generate consistent and high-quality novel views under significant camera movement. In this work, we propose a pose-guided diffusion model to generate a consistent long-term video of novel views from a single image. We design an attention layer that uses epipolar lines as constraints to facilitate the association between different viewpoints. Experimental results on synthetic and real-world datasets demonstrate the effectiveness of the proposed diffusion model against state-of-the-art transformer-based and GAN-based approaches. More qualitative results are available at https://poseguided-diffusion.github.io/.
| 1. Introduction
Offering immersive 3D experiences from daily photos
has attracted considerable attention. It is a cornerstone technique for a wide range of applications such as 3D photo [18,49], 3D asset generation [35], and 3D scene navigation [4]. Notably, rapid progress has been made in addressing the single-image view synthesis [40,50,61,69] issue. Given an arbitrarily narrow field-of-view image, these frameworks can produce high-quality images from novel viewpoints. However, these methods are limited to view-
points that are within a small range of the camera motion.
The long-term single-image view synthesis task is recently proposed to address the limitation of small camera motion range. As demonstrated in Figure 1, the task attempts to generate a video from a single image and a sequence of camera poses. Note that, different from the single-image view synthesis problem, the viewpoints of the last few video frames produced under this setting may be far away from the original viewpoint. Take the results shown in Figure 1, for instance: the cameras are moving into different rooms that were not observed in the input images.
Generating long-term view synthesis results from a single image is challenging for two main reasons. First, due to the large range of the camera motion, e.g., moving into a new room, a massive amount of new content needs to be hallucinated for the regions that are not observed in the input image. Second, the view synthesis results should be consistent across viewpoints, particularly in the regions observed in the input viewpoint or previously hallucinated in the other views.
Both explicit- and implicit-based solutions are proposed to handle these issues. Explicit-based approaches [17,24,25,40] use a “warp and refine” strategy. Specifically, the image is first warped from the input to novel viewpoints according to some 3D priors, i.e., monocular depth estimation [37,38]. Then a transformer or GAN-based generative model is designed to refine the warped image. However, the success of the explicit-based schemes hinges on the accuracy of the monocular depth estimation. To address this limitation, Rombach et al. [42] designed a geometry-free transformer to implicitly learn the 3D correspondences between the input and output viewpoints. Although reasonable new content is generated, the method fails to produce coherent results across viewpoints. The LoR [39] framework leverages the auto-regressive transformer to further improve the consistency. Nevertheless, generating consistent, high-quality long-term view synthesis results remains challenging.
In this paper, we propose a framework based on diffusion models for consistent and realistic long-term novel view synthesis. Diffusion models [14,52,54] have achieved impressive performance on many content creation applications, such as image-to-image translation [44] and text-to-image generation [2,36,45]. However, these methods only work on 2D images and lack 3D controllability. To this end, we develop a pose-guided diffusion model with the epipolar attention layers. Specifically, in the UNet [43] network of the proposed diffusion model, we design the epipolar attention layer to associate the input-view and output-view features. According to the camera pose information, we estimate the epipolar line on the input-view feature map for each pixel on the output-view feature map. Since these epipolar lines indicate the candidate correspondences, we use the lines as the constraint to compute the attention weight between the input and output views.
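A hedged sketch of epipolar-constrained attention is given below; the fundamental matrix, the distance threshold, and the single-head formulation are simplifying assumptions, not the paper's exact layer.

```python
# Sketch: for every output-view pixel, keep only input-view pixels close to its
# epipolar line, given a fundamental matrix F derived from the relative pose.
import torch

def epipolar_attention(q, k, v, Fm, coords_out, coords_in, thresh=0.05):
    # q: (N_out, C), k/v: (N_in, C); coords_*: homogeneous pixel coords (N, 3)
    lines = coords_out @ Fm.t()                                   # (N_out, 3) epipolar lines l = F x
    num = (lines @ coords_in.t()).abs()                           # |l . x'| for every pixel pair
    dist = num / lines[:, :2].norm(dim=1, keepdim=True).clamp(min=1e-6)
    mask = dist > thresh                                          # too far from the line
    attn = (q @ k.t()) / q.size(-1) ** 0.5
    attn = attn.masked_fill(mask, float('-inf')).softmax(dim=-1)
    return torch.nan_to_num(attn) @ v                             # rows with no candidates -> 0

Fm = torch.randn(3, 3)                                            # placeholder fundamental matrix
coords = torch.cat([torch.rand(64, 2), torch.ones(64, 1)], dim=1)
out = epipolar_attention(torch.randn(64, 32), torch.randn(64, 32),
                         torch.randn(64, 32), Fm, coords, coords)
```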
We conduct extensive quantitative and qualitative studies on the real-world Realestate10K [76] and synthetic Matterport3D [7] datasets to evaluate the proposed approach. With the epipolar attention layer, our pose-guided diffusion model is capable of synthesizing long-term novel views that 1) have realistic new content in unseen regions and 2) are consistent with the other viewpoints. We summarize the contributions as follows:
• We propose a pose-guided diffusion model for the
long-term single-image view synthesis task.
• We consider the epipolar line as the constraint and design an epipolar attention to associate pixels in the images at input and output views for the UNet network in the diffusion model.
• We validate that the proposed method synthesizes re-
alistic and consistent long-term view synthesis results
on the Realestate10K and Matterport3D datasets.
|
Tang_Graph_Transformer_GANs_for_Graph-Constrained_House_Generation_CVPR_2023 | Abstract
We present a novel graph Transformer generative adver-
sarial network (GTGAN) to learn effective graph node re-
lations in an end-to-end fashion for the challenging graph-
constrained house generation task. The proposed graph-
Transformer-based generator includes a novel graph Trans-
former encoder that combines graph convolutions and self-
attentions in a Transformer to model both local and global
interactions across connected and non-connected graph
nodes. Specifically, the proposed connected node atten-
tion (CNA) and non-connected node attention (NNA) aim
to capture the global relations across connected nodes and
non-connected nodes in the input graph, respectively. The
proposed graph modeling block (GMB) aims to exploit local
vertex interactions based on a house layout topology. More-
over, we propose a new node classification-based discrim-
inator to preserve the high-level semantic and discrimina-
tive node features for different house components. Finally,
we propose a novel graph-based cycle-consistency loss that
aims at maintaining the relative spatial relationships be-
tween ground truth and predicted graphs. Experiments on
two challenging graph-constrained house generation tasks
(i.e., house layout and roof generation) with two public
datasets demonstrate the effectiveness of GTGAN in terms
of objective quantitative scores and subjective visual real-
ism. New state-of-the-art results are established by large
margins on both tasks.
| 1. Introduction
This paper focuses on converting an input graph to a re-
alistic house footprint, as depicted in Figure 1. Existing
house generation methods such as [2, 16, 20, 28, 32, 45, 47],
typically rely on building convolutional layers. However,
convolutional architectures lack an understanding of long-
range dependencies in the input graph since inherent in-
ductive biases exist. Several Transformer architectures
[3, 6, 11, 17, 18, 24, 43, 44, 46, 54, 55] based on the self-
attention mechanism have recently been proposed to encode long-range or global relations and thus learn highly expressive
feature representations. On the other hand, graph convolu-
tion networks are good at exploiting local and neighborhood
vertex correlations based on a graph topology. Therefore,
it stands to reason to combine graph convolution networks
and Transformers to model local as well as global interac-
tions for solving graph-constrained house generation.
To this end, we propose a novel graph Transformer gen-
erative adversarial network (GTGAN), which consists of
two main novel components, i.e., a graph Transformer-
based generator and a node classification-based discrimi-
nator (see Figure 1). The proposed generator aims to gen-
erate a realistic house from the input graph, which consists
of three components, i.e., a convolutional message passing
neural network (Conv-MPN), a graph Transformer encoder
(GTE), and a generation head. Specifically, Conv-MPN first
receives graph nodes as inputs and aims to extract discrim-
inative node features. Next, the embedded nodes are fed to
GTE, in which the long-range and global relation reasoning
is performed by the connected node attention (CNA) and
non-connected node attention (NNA) modules. Then, the
output from both attention modules is fed to the proposed
graph modeling block (GMB) to capture local and neigh-
borhood relationships based on a house layout topology. Fi-
nally, the output of GTE is fed to the generative head to pro-
duce the corresponding house layout or roof. To the best of
our knowledge, we are the first to use a graph Transformer
to model local and global relations across graph nodes for
solving graph-constrained house generation.
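A minimal sketch of this encoder idea follows (our simplification, not the GTGAN code); the additive combination of the branches and all dimensions are assumptions.

```python
# Sketch: node features refined by two attention branches restricted to
# connected vs. non-connected node pairs (CNA/NNA) and by a graph-convolution
# step over the layout adjacency (GMB).
import torch
import torch.nn.functional as F

def masked_attention(x, mask):                 # x: (N, C); mask: (N, N) boolean, True = attend
    attn = (x @ x.t()) / x.size(-1) ** 0.5
    attn = attn.masked_fill(~mask, float('-inf')).softmax(dim=-1)
    return torch.nan_to_num(attn) @ x

def gtgan_encoder_block(x, adj):
    eye = torch.eye(len(x), dtype=torch.bool)
    cna = masked_attention(x, adj.bool() | eye)                  # connected node attention
    nna = masked_attention(x, (~adj.bool()) | eye)               # non-connected node attention
    deg = adj.sum(1, keepdim=True).clamp(min=1)
    gmb = F.relu((adj @ x) / deg)                                # local message passing
    return x + cna + nna + gmb

x = torch.randn(6, 128)                        # 6 room nodes from the input bubble diagram
adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()            # symmetric adjacency
out = gtgan_encoder_block(x, adj)
```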
In addition, the proposed discriminator aims to distin-
guish real and fake house layouts, which ensures that our
generated house layouts or roofs look realistic. At the same
time, the discriminator classifies the generated rooms to
their corresponding real labels, preserving the discrimina-
tive and semantic features ( e.g., size and position) for differ-
ent house components. To maintain the graph-level layout,
we also propose a novel graph-based cycle-consistency loss
to preserve the relative spatial relationships between ground
truth and predicted graphs.
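One plausible instantiation of such a term, sketched under the assumption that each node is summarised by a 2D centroid (this is our illustration, not the paper's actual loss), is shown below.

```python
# Sketch: pairwise offsets between predicted room centroids should match the
# offsets between the corresponding ground-truth centroids.
import torch

def graph_cycle_consistency(pred_centers, gt_centers):
    d_pred = pred_centers.unsqueeze(0) - pred_centers.unsqueeze(1)   # (N, N, 2) offsets
    d_gt = gt_centers.unsqueeze(0) - gt_centers.unsqueeze(1)
    return (d_pred - d_gt).abs().mean()

loss = graph_cycle_consistency(torch.rand(6, 2), torch.rand(6, 2))
```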
Overall, our contributions are summarized as follows:
Figure 1. Overview of the proposed GTGAN on house layout generation. It consists of a novel graph Transformer-based generator Gand
a novel node classification-based discriminator D. The generator takes graph nodes as input and aims to capture local and global relations
across connected and non-connected nodes using the proposed graph modeling block and multi-head node attention, respectively. Note that
we do not use position embeddings since our goal is to predict positional node information in the generated house layout. The discriminator
Daims to distinguish real and generated layouts and simultaneously classify the generated house layouts to their corresponding room
types. The graph-based cycle-consistency loss aligns the relative spatial relationships between ground truth and predicted nodes. The
whole framework is trained in an end-to-end fashion so that all components benefit from each other.
• We propose a novel Transformer-based network ( i.e., GT-
GAN) for the challenging graph-constrained house gener-
ation task. To the best of our knowledge, GTGAN is the
first Transformer-based framework, enabling more effec-
tive relation reasoning for composing house layouts and
validating adjacency constraints.
• We propose a novel graph Transformer generator that
combines both graph convolutional networks and Trans-
formers to explicitly model global and local correlations
across both connected and non-connected nodes simul-
taneously. We also propose a new node classification-
based discriminator to preserve high-level semantic and
discriminative features for different types of rooms.
• We propose a novel graph-based cycle-consistency loss to
guide the learning process toward accurate relative spatial
distance of graph nodes.
• Qualitative and quantitative experiments on two challeng-
ing graph-constrained house generation tasks ( i.e., house
layout generation and house roof generation) with two
datasets demonstrate that GTGAN can generate better
house structures than state-of-the-art methods, such as
HouseGAN [28] and RoofGAN [32].
|
Spratley_Unicode_Analogies_An_Anti-Objectivist_Visual_Reasoning_Challenge_CVPR_2023 | Abstract
Analogical reasoning enables agents to extract relevant
information from scenes, and efficiently navigate them in
familiar ways. While progressive-matrix problems (PMPs)
are becoming popular for the development and evaluation
of analogical reasoning in computer vision, we argue that
the dominant methodology in this area struggles to expose
the lack of meaningful generalisation in solvers, and rein-
forces an objectivist stance on perception – that objects can
only be seen one way – which we believe to be counter-
productive. In this paper, we introduce the Unicode Analo-
gies challenge, consisting of polysemic, character-based
PMPs to benchmark fluid conceptualisation ability in vision
systems. Writing systems have evolved characters at mul-
tiple levels of abstraction, from iconic through to symbolic
representations, producing both visually interrelated yet ex-
ceptionally diverse images when compared to those exhib-
ited by existing PMP datasets. Our framework has been de-
signed to challenge models by presenting tasks much harder
to complete without robust feature extraction, while remain-
ing largely solvable by human participants. We therefore
argue that Unicode Analogies elegantly captures and tests
for a facet of human visual reasoning that is severely lack-
ing in current-generation AI.
| 1. Introduction
Traditionally, statistical classification models have been
designed to neatly cleave data into categories. Even in
tasks such as visual scene decomposition, where data re-
sists full description by any one label, there is an underly-
ing objectivist assumption being made; the expectation of
there being an objective number of distinguishable “things”
present, themselves belonging to singular classes. Human
visual perception makes a departure from this. The sym-
bolic world to which we attend, with firm compositional
rules for scenes and their objects, and with their parts and
positions, is subsisted by a churning sea of ongoing concep-
tualisation processes deeply fluid and contextual [15].
In recent years, there has been a proliferation of com-
puter vision architectures built with object-centric inductive
Figure 1. An example problem in UA, instantiating the Distribute-
Three rule with the Closure concept. Five out of six context frames
are provided (left), with four answer frames to choose from (right).
The correct answer is emboldened.
biases [21], many of which represent states-of-the-art on
popular datasets [7, 35, 41]. This is an important direction,
as training models to decompose scenes into objects allows
for an explicit abstraction stage promoting feature reuse.
However, abstract visual reasoning tasks such as Bongard
problems [3] expose philosophically [20] — and in this pa-
per, experimentally — that such an approach might work
against the creation of models that possess the ability to ab-
stract and deploy useful concepts. This observation also en-
gages a current debate in the literature regarding the scala-
bility of built-in knowledge and inductive biases [24, 36].
Humans display flexibility in how they decompose
scenes, and perceive such scenes at a level of abstraction
informed by past experiences and appropriate to present
goals [8,9]. Scene understanding in humans is therefore un-
dergirded by something other than the perception of static
objects [22], and the idea that scene modelling research can
separate perception and higher cognition into a pipeline of
self-contained modules is strongly critiqued [5].
Noticing other shortcomings of deep-learnt approaches
to computer vision, including brittleness to out-of-
distribution (OOD) data, a small number of abstraction
datasets inspired by Raven’s Progressive Matrices have
been recently released [23]. Further motivations to this
direction include a) the expectation that tasks with such
an extended history in general psychometric testing would
be useful to import into computer vision research, and b)
the opinion that the more broadly applicable a model’s ab-
stracted concepts become, the more robust that model will
be under OOD conditions [29, 35]. While the applicabil-
ity of such concepts should ideally be evaluated by these
datasets, common approaches to dataset creation feature
conceptual schemas consisting of simple objects that can
be neatly dropped into scenes, and extracted by scene de-
composition stages [35]. This seems to require little in the
way of contextual perception, such as Hofstadter’s notion
of “conceptual slippage” [16].
We observe that the world’s writing systems present a
diverse resource of characters that are amenable to con-
tent analysis, and can assemble novel reasoning problems
of their own. We introduce the Unicode Analogies (UA)
challenge, consisting of character-based progressive ma-
trix problems (PMPs) to benchmark fluid conceptualisation
ability in vision systems. The characters in UA are poly-
semic, and may instantiate any number of concepts, with the
salient concept only revealing itself given context (Fig. 1).
By generating training and testing problems from disjoint
sets of characters, we challenge these systems by present-
ing tasks much harder to complete without robust feature
extraction, while remaining largely solvable by human par-
ticipants. In doing so, we contribute a dataset that unlike
others in this area, operates on a rich conceptual schema
that invites fine-grained experimentation, and is easily ex-
tensible to new user-defined concepts. Over five key ex-
periments, we explore human and model performance on
a number of dataset splits generated by UA, demonstrate
that state-of-the-art solvers are still far from achieving the
founding goals their datasets were created for, and encour-
age new solvers to overcome these limitations.
|
Somepalli_Diffusion_Art_or_Digital_Forgery_Investigating_Data_Replication_in_Diffusion_CVPR_2023 | Abstract
Cutting-edge diffusion models produce images with high
quality and customizability, enabling them to be used for
commercial art and graphic design purposes. But do diffu-
sion models create unique works of art, or are they repli-
cating content directly from their training sets? In this
work, we study image retrieval frameworks that enable us
to compare generated images with training samples and de-
tect when content has been replicated. Applying our frame-
works to diffusion models trained on multiple datasets in-
cluding Oxford flowers, Celeb-A, ImageNet, and LAION, we
discuss how factors such as training set size impact rates of
content replication. We also identify cases where diffusion
models, including the popular Stable Diffusion model, bla-
tantly copy from their training data. Project page: https://somepago.github.io/diffrep.html
| 1. Introduction
The rapid rise of diffusion models has led to new gen-
erative tools with the potential to be used for commer-
cial art and graphic design. The power of the diffusion
paradigm stems in large part from its reliance on simple de-
noising networks that maintain their stability when trained
on huge web-scale datasets containing billions of image-
caption pairs. These mega-datasets have the power to forge
commercial models like DALL ·E[54] and Stable Diffusion
[56], but also bring with them a number of legal and ethical
risks [ 7]. Because these datasets are too large for careful hu-
man curation, the origins and intellectual property rights of
the data sources are largely unknown. This fact, combined
with the ability of large models to memorize their training
data [ 9,10,22], raises questions about the originality of dif-
fusion outputs. There is a risk that diffusion models might,
without notice, reproduce data from the training set directly,
or present a collage of multiple training images.
We informally refer to the reproduction of training
images, either in part or in whole, as content replication .
In principle, replicating partial or complete information
from the training data has implications for the ethical and
legal use of diffusion models in terms of attributions to
artists and photographers. Replicants are either a benefit or
a hazard; there may be situations where content replication
is acceptable, desirable, or fair use, and others where it is
“stealing.” While these ethical boundaries are unclear at
this time, we focus on the scientific question of whether
replication actually happens with modern state-of-the-art
diffusion models, and to what degree .
Our contributions are as follows. We begin with a study
of how to detect content replication, and we consider a
range of image similarity metrics developed in the self-
supervised learning and image retrieval communities. We
benchmark the performance of different image feature ex-
tractors using real and purpose-built synthetic datasets and
show that state-of-the-art instance retrieval models work
well for this task. Armed with new and existing tools, we
search for data replication behavior in a range of diffusion
models with different dataset properties. We show that for
small and medium dataset sizes, replication happens fre-
quently, while for a model trained on the large and diverse
ImageNet dataset, replication seems undetectable.
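The retrieval procedure can be sketched as below (illustrative only); the ResNet-50 backbone, the cosine-similarity threshold, and the image sizes are placeholders rather than the paper's chosen feature extractors.

```python
# Sketch: embed generated and training images with a feature extractor, then
# flag a generation as a possible replicant when its nearest training
# neighbour is unusually similar.
import torch
import torch.nn.functional as F
import torchvision.models as models

encoder = models.resnet50(weights=None)        # stand-in; the paper compares SSL/retrieval models
encoder.fc = torch.nn.Identity()
encoder.eval()

@torch.no_grad()
def embed(images):                              # images: (N, 3, 224, 224)
    return F.normalize(encoder(images), dim=-1)

@torch.no_grad()
def top_matches(generated, training, threshold=0.9):
    sims = embed(generated) @ embed(training).t()        # cosine similarity matrix
    best, idx = sims.max(dim=1)
    return [(i, idx[i].item(), best[i].item()) for i in range(len(best)) if best[i] > threshold]

matches = top_matches(torch.rand(4, 3, 224, 224), torch.rand(16, 3, 224, 224))
```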
This latter finding may lead one to believe that replica-
tion is not a problem for large-scale models. However, the
even larger Stable Diffusion model exhibits clear replica-
tion in various forms (Fig 1). Furthermore, we believe that
the rate of content replication we identify in Stable Diffu-
sion likely underestimates the true rate because the model is
trained on a 2B image split of LAION, but we only search
for matches in the smaller 12M “Aesthetics v2 6+” subset.
The level of image similarity required for something to
count as “replication” is subjective and may depend on both
the amount of diversity within the image’s class as well as
the observer. Some replication behaviors we uncover are
unambiguous, while in other instances they fall into a gray
area. Rather than choosing an arbitrary definition, we fo-
cus on presenting quantitative and qualitative results to the
reader, leaving each person to draw their own conclusions
based on their role and stake in the process of generative AI.
|
Tukra_Improving_Visual_Representation_Learning_Through_Perceptual_Understanding_CVPR_2023 | Abstract
We present an extension to masked autoencoders (MAE)
which improves on the representations learnt by the model
by explicitly encouraging the learning of higher scene-level
features. We do this by: (i) the introduction of a perceptual
similarity term between generated and real images (ii) in-
corporating several techniques from the adversarial train-
ing literature including multi-scale training and adaptive
discriminator augmentation. The combination of these re-
sults in not only better pixel reconstruction but also repre-
sentations which appear to capture better higher-level de-
tails within images. More consequentially, we show how
our method, Perceptual MAE, leads to better performance
when used for downstream tasks outperforming previous
methods. We achieve 78.1%top-1 accuracy linear probing
on ImageNet-1K and up to 88.1%when fine-tuning, with
similar results for other downstream tasks, all without use
of additional pre-trained models or data.
| 1. Introduction
Self-supervision provides a powerful framework for
training deep neural networks without relying on explicit
supervision or labels where learning proceeds by predicting
one part of the input data from another. Approaches based
on denoising autoencoders [45], where the input is masked
and the missing parts reconstructed, have shown to be ef-
fective for pre-training in NLP with BERT [8], and more
recently similar techniques have been applied for learning
visual representations from images [1,4,17,30]. Such meth-
ods effectively use image reconstruction as a pretext task on
the basis that by learning to predict missing patches useful
representations can be learnt for downstream tasks.
One challenge when applying such techniques to images
is that, unlike language where words contain some level of
semantic meaning by design, the pixels in images are natu-
ral signals containing high-frequency variations. Therefore,
image-based denoising autoencoders have been adapted to
avoid learning trivial solutions to reconstruction based on
local textures or patterns. BEiT [1] uses an intermediarycodebook of patches such that pixels are not reconstructed
directly, whilst MAE [17] masks a high proportion of the
image to force the model to learn how to reconstruct whole
scenes with limited context.
In this paper, we build upon MAE and ask how we can
move beyond the implicit conditioning of high masking ra-
tios to explicitly incorporate the learning of higher-order
‘semantic’ features into the learning objective. To do this,
we focus on introducing scene-level information by adding
a perceptual loss term [22]. This works by constraining
feature map similarity with a second pre-trained network,
a technique which has been shown empirically in the gener-
ative modelling literature to improve perceptual reconstruc-
tion quality [54]. In addition, this also provides a mecha-
nism to incorporate relevant scene-level cues contained in
the second network (which could be e.g.a strong ImageNet
classifier or a pre-trained language-vision embedding).
One of the benefits of MAE is that it can rapidly learn
strong representations using only self-supervision from the
images in the target pre-training set. To maintain this prop-
erty, we introduce a second idea: tying the features not with
a separate network, but with an adversarial discriminator
trained in parallel to distinguish between real and generated
images. Both ideas combined result in not only a lower re-
construction error, but also learnt representations which bet-
ter capture details of the scene layout and object boundaries
(see Figure 3) without either explicit supervision or the use
of hand-engineered inductive biases.
Finally, we build on these results and show that tech-
niques from the generative modelling literature such as
multi-scale gradients [24] and adaptive discriminator aug-
mentation [25] can lead to further improvements in the
learnt representation, and this also translates into a further
boost in performance across downstream tasks. We hypoth-
esise that the issues that these methods were designed to
overcome, such as mode collapse during adversarial train-
ing and incomplete learning of the underlying data distribu-
tion, are related to overfitting on low-level image features.
Our contributions can be summarized as follows: (i) we
introduce a simple and self-contained technique to improve
the representations learnt by masked image modelling based
on perceptual loss and adversarial learning, (ii) we per-
form a rigorous evaluation of this method and variants, and
set new state-of-the-art for masked image modelling with-
out additional data for classification on ImageNet-1K, ob-
ject detection on MS COCO and semantic segmentation on
ADE20K, (iii) we demonstrate this approach can further
draw on powerful pre-trained models when available, re-
sulting in a further boost in performance and (iv) we show
our approach leads qualitatively to more ‘object-centric’
representations and stronger performance with frozen fea-
tures (in the linear probe setting) compared to MAE.
|
Sui_ScanDMM_A_Deep_Markov_Model_of_Scanpath_Prediction_for_360deg_CVPR_2023 | Abstract
Scanpath prediction for 360° images aims to produce dynamic gaze behaviors based on the human visual perception mechanism. Most existing scanpath prediction methods for 360° images do not give a complete treatment of the time-
dependency when predicting human scanpath, resulting in
inferior performance and poor generalizability. In this pa-
per, we present a scanpath prediction method for 360◦im-
ages by designing a novel Deep Markov Model (DMM)
architecture, namely ScanDMM. We propose a semantics-
guided transition function to learn the nonlinear dynam-
ics of time-dependent attentional landscape. Moreover, a
state initialization strategy is proposed by considering the
starting point of viewing, enabling the model to learn the
dynamics with the correct “launcher”. We further demon-
strate that our model achieves state-of-the-art performance
on four 360◦image databases, and exhibit its generalizabil-
ity by presenting two applications of applying scanpath pre-
diction models to other visual tasks – saliency detection and
image quality assessment, expecting to provide profound in-
sights into these fields.
| 1. Introduction
360◦images, also referred to as omnidirectional, sphere
or virtual reality (VR) images, have been a popular type of
visual data in many applications, providing us with immer-
sive experiences. Nevertheless, how people explore virtual
environments in 360◦images has not been well understood.
The scanpath prediction model that aims at generating real-
istic gaze trajectories has obtained increasing attention due
to its significant influence in understanding users’ viewing
behaviors in VR scenes, as well as in developing VR ren-
dering, display, compression, and transmission [51].
Scanpath prediction has been explored for many years in
2D images [29]. However, 360° images differ greatly from 2D images, as a larger space is offered to interact with
*Corresponding author (email: [email protected])
Figure 1. Existing scanpath prediction models for 360° images could be classified into two types: saliency-based models [2, 70, 71] and generative models [42, 43]. The scanpaths produced by saliency-based models, taking the study [2] as an example, commonly exhibit unstable behavior with large displacements and scarce focal regions. Generative models, taking the study [43] as an example, show less attention to regions of interest. The proposed ScanDMM can produce more realistic scanpaths that focus on regions of interest.
– humans are allowed to use both head and gaze move-
ments to explore viewports of interest in the scene. In such a
case, viewing conditions, e.g., the starting point of viewing,
have an important impact on humans' scanpaths [20, 21, 52],
and leads to complex and varied scanpaths among humans.
This is inherently different from what happens in 2D visu-
als since humans can directly guide their attention to the
regions of interest. Therefore, scanpath prediction for 360◦
images is a more complex task.
Current 360◦image scanpath prediction methods could
be roughly divided into two categories: saliency-based [2,
70, 71] and generative methods [42, 43]. The basic idea of
the former one is to sample predicted gaze points from a
saliency map. The performance of such methods is highly
dependent on that of the saliency maps. Furthermore, con-
structing a satisfactory sampling strategy to account for
time-dependent visual behavior is non-trivial – the results of
SaltiNet [2] exhibit unstable behavior with large displace-
ments and scarce focal regions (see Fig. 1). The latter group
of methods utilizes the advance of generative models, e.g.,
Generative Adversarial Network (GAN), to predict realis-
tic scanpaths. However, such methods show less attention
to regions of interest (see Fig. 1). In addition, the GAN-
based methods are less flexible in determining the length of
scanpaths and commonly suffer from unstable training.
None of the above-mentioned studies gives a complete treatment of the time-dependency of viewing behavior, which is critical for modeling dynamic gaze behaviors in 360° images. For time-series data, a popular approach is to leverage
sequential models, e.g., recurrent neural networks (RNNs),
as exemplified in gaze prediction for 360° videos [17, 35, 45]. However, such deterministic models are prone to overfitting, particularly on small 360° databases. More impor-
tantly, they typically make simplistic assumptions, e.g., one
choice is to concatenate the saliency map to the model’s hid-
den states [17, 45], which assumes that the network learns
how the states evolve by learning from saliency maps. Nev-
ertheless, neuroscience research [62] reveals that in addition to top-down and bottom-up features, prior history and scene semantics are essential sources for guiding visual attention. Moreover, to be identified as items of interest or rejected as distractors, items must be compared to target templates held in memory [62]. Inspired by this, we argue that humans' scanpaths in 360° scenes are complex nonlinear dy-
namic attentional landscapes over time as a function of in-
terventions of scene semantics on visual working memory.
We present a probabilistic approach to learning the visual
states that encode the time-dependent attentional landscape
by specifying how these states evolve under the guidance of
scene semantics and visual working memory. We instanti-
ate our approach in the Deep Markov Model (DMM) [28],
namely ScanDMM. Our contributions can be summarized
as follows:
• We present a novel method for time-dependent vi-
sual attention modeling for 360° images. Specifically,
we model the mechanism of visual working memory
by maintaining and updating the visual state in the
Markov chain. Furthermore, a semantics-guided tran-
sition function is built to learn the nonlinear dynamics
of the states, in which we model the interventions of
scene semantics on visual working memory.
• We propose a practical strategy to initialize the visual state, enabling our model to focus on learning the dynamics of the states with the correct “launcher”, as well as allowing us to assign a specific starting point for scanpath generation. Moreover, ScanDMM is capable of producing 1,000 variable-length scanpaths within one second, which is critical for real-world applications.
• We apply the proposed ScanDMM to two other com-
puter vision tasks – saliency detection and image qual-
ity assessment, which demonstrates that our model has strong generalizability and is expected to provide
insights into other vision tasks.
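To make the semantics-guided state-transition idea described above more concrete, the following is a minimal, illustrative sketch of a Markov rollout that emits gaze points. The layer sizes, the Gaussian reparameterization, and the `semantic_feat` input are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ToyScanDMM(nn.Module):
    """Toy semantics-guided Markov transition for scanpath rollout (illustrative only)."""
    def __init__(self, state_dim=32, sem_dim=64):
        super().__init__()
        # transition: next hidden visual state from previous state + scene semantics
        self.transition = nn.Sequential(
            nn.Linear(state_dim + sem_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * state_dim))          # mean and log-variance of z_t
        # emission: one gaze point (coordinates normalized to [0, 1]) from the state
        self.emission = nn.Linear(state_dim, 2)

    def forward(self, z0, semantic_feat, length=20):
        z, path = z0, []
        for _ in range(length):
            stats = self.transition(torch.cat([z, semantic_feat], dim=-1))
            mu, logvar = stats.chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # stochastic state update
            path.append(torch.sigmoid(self.emission(z)))           # predicted gaze point
        return torch.stack(path, dim=1)                            # (batch, length, 2)

# usage: start from a chosen initial state (e.g., one encoding the given starting point)
model = ToyScanDMM()
scanpaths = model(torch.zeros(4, 32), torch.randn(4, 64), length=25)
```

In the actual ScanDMM, the transition and emission are trained within the Deep Markov Model framework, and the initial state is derived from the chosen starting point of viewing, as described above.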
|
Sun_Masked_Motion_Encoding_for_Self-Supervised_Video_Representation_Learning_CVPR_2023 | Abstract
How to learn discriminative video representation from
unlabeled videos is challenging but crucial for video anal-
ysis. The latest attempts seek to learn a representation
model by predicting the appearance contents in the masked
regions. However, simply masking and recovering ap-
pearance contents may not be sufficient to model tempo-
ral clues as the appearance contents can be easily recon-
structed from a single frame. To overcome this limitation,
we present Masked Motion Encoding (MME), a new pre-
training paradigm that reconstructs both appearance and
motion information to explore temporal clues. In MME,
we focus on addressing two critical challenges to improve
the representation performance: 1) how to effectively represent
the possible long-term motion across multiple frames; and
2) how to obtain fine-grained temporal clues from sparsely
sampled videos. Motivated by the fact that humans are able to recognize an action by tracking objects' position changes
and shape changes, we propose to reconstruct a motion
trajectory that represents these two kinds of change in the
masked regions. Besides, given the sparse video input, we
enforce the model to reconstruct dense motion trajectories
in both spatial and temporal dimensions. Pre-trained with
our MME paradigm, the model is able to anticipate long-
term and fine-grained motion details. Code is available at
https://github.com/XinyuSun/MME .
| 1. Introduction
Video representation learning plays a critical role in
video analysis like action recognition [15,32,79], action lo-
calization [12,81], video retrieval [4,82], videoQA [40], etc.
Learning video representation is very difficult for two rea-
sons. Firstly, it is extremely difficult and labor-intensive to
annotate videos, and thus relying on annotated data to learn
*Equal contribution. Email: {csxinyusun, phchencs}@gmail.com
†Corresponding author. Email: [email protected]
representations is not scalable. Also, the complex spatial-temporal contents with a large data volume are difficult to represent simultaneously. How to perform self-supervised video representation learning using only unlabeled videos has been a prominent research topic [7,13,49].
Taking advantage of spatial-temporal modeling using a
flexible attention mechanism, vision transformers [3, 8, 25,
26, 53] have shown their superiority in representing video.
Prior works [5, 37, 84] have successfully introduced the
mask-and-predict scheme in NLP [9,23] to pre-train an im-
age transformer. These methods vary in different recon-
struction objectives, including raw RGB pixels [37], hand-
crafted local patterns [75], and VQ-VAE embeddings [5], all of which are static appearance information in images. Based
on previous successes, some researchers [64,72,75] attempt
to extend this scheme to the video domain, where they mask
3D video regions and reconstruct appearance information.
However, these methods suffer from two limitations. First, as the appearance information can be well reconstructed in a single image with an extremely high masking ratio (85% in MAE [37]), it can also be reconstructed frame-by-frame in the tube-masked video, which lets the model neglect important temporal clues. This can be verified by our ablation
study (cf. Section 4.2.1). Second , existing works [64, 75]
often sample frames sparsely with a fixed stride, and then
mask some regions in these sampled frames. The recon-
struction objectives only contain information in the sparsely
sampled frames, and thus are hard to provide supervision
signals for learning fine-grained motion details, which is
critical to distinguish different actions [3, 8].
In this paper, we aim to design a new mask-and-predict
paradigm to tackle these two issues. Fig. 1(a) shows two key
factors to model an action, i.e., position change and shape
change. By observing the position change of the person, we
realize he is jumping in the air, and by observing the shape
change that his head falls back and then tucks to his chest,
we are aware that he is adjusting his posture to cross the bar.
We believe that anticipating these changes helps the model
better understand an action.
Figure 1. Illustration of motion trajectory reconstruction for Masked Motion Encoding. (a) Position change and shape change over time are two key factors to recognize an action; we leverage them to represent the motion trajectory. (b) Compared with the current appearance reconstruction task, our motion trajectory reconstruction takes into account both appearance and motion information.
Based on this observation, instead of predicting the ap-
pearance contents, we propose to predict motion trajectory ,
which represents impending position and shape changes,
for the mask-and-predict task. Specifically, we use a dense
grid to sample points as different object parts, and then track
these points using optical flow in adjacent frames to gener-
ate trajectories, as shown in Fig. 1(b). The motion trajectory
contains information in two aspects: the position features
that describe relative movement; and the shape features that
describe shape changes of the tracked object along the tra-
jectory. To predict this motion trajectory, the model has to learn to reason about the semantics of masked objects based on the visible patches, and then learn the correlation of objects across different frames to estimate their accurate
motions. We name the proposed mask-and-predict task as
Masked Motion Encoding (MME).
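As a rough illustration of how such trajectory targets could be constructed (a simplified sketch under assumed inputs, not the released code), the snippet below advects a sparse grid of points through precomputed per-frame optical-flow fields using a nearest-pixel lookup and records their position changes relative to the starting locations; shape features along the trajectory would be gathered analogously.

```python
import numpy as np

def track_grid_points(flows, stride=16):
    """Track a regular grid of points through per-frame optical flow (T-1, H, W, 2),
    assumed to store (dx, dy), and return relative position changes over time."""
    t_steps, H, W, _ = flows.shape
    ys, xs = np.meshgrid(np.arange(0, H, stride), np.arange(0, W, stride), indexing="ij")
    pts = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(np.float32)  # (N, 2) as (x, y)
    traj = [np.zeros_like(pts)]                      # displacement at t = 0 is zero
    cur = pts.copy()
    for t in range(t_steps):
        xi = np.clip(np.round(cur[:, 0]).astype(int), 0, W - 1)
        yi = np.clip(np.round(cur[:, 1]).astype(int), 0, H - 1)
        cur = cur + flows[t, yi, xi]                 # advect points with nearest-pixel flow
        traj.append(cur - pts)                       # position change relative to the start
    return np.stack(traj, axis=0)                    # (T, N, 2)

# usage with random flow as a stand-in for a real optical-flow estimator
traj = track_grid_points(np.random.randn(7, 224, 224, 2).astype(np.float32))
```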
Moreover, to help the model learn fine-grained motion
details, we further propose to interpolate the motion trajec-
tory. Taking sparsely sampled video as input, the model is
asked to reconstruct spatially and temporally dense motion
trajectories. This is inspired by the video frame interpo-
lation task [77] where a deep model can reconstruct dense
video at the pixel level from sparse video input. Differ-
ent from it, we aim to reconstruct the fine-grained motion details of moving objects, which carry higher-level motion information and are helpful for understanding actions. Our
main contributions are as follows:
• Existing mask-and-predict tasks based on appearance reconstruction struggle to learn important temporal clues, which are critical for representing video content.
Our Masked Motion Encoding (MME) paradigm over-
comes this limitation by asking the model to recon-
struct motion trajectory.
• Our motion interpolation scheme takes a sparsely sam-
pled video as input and then predicts dense motion tra-
jectory in both spatial and temporal dimensions. This
scheme endows the model with the ability to capture long-term and
fine-grained motion clues from sparse video input.
Extensive experimental results on multiple standard video
recognition benchmarks prove that the representations
learned from the proposed mask-and-predict task achieve
state-of-the-art performance on downstream action recog-
nition tasks. Specifically, pre-trained on Kinetics-400 [10],
our MME brings the gain of 2.3% on Something-Something
V2 [34], 0.9% on Kinetics-400, 0.4% on UCF101 [59], and
4.7% on HMDB51 [44].
|
Turki_SUDS_Scalable_Urban_Dynamic_Scenes_CVPR_2023 | Abstract
We extend neural radiance fields (NeRFs) to dynamic
large-scale urban scenes. Prior work tends to reconstruct
single video clips of short durations (up to 10 seconds). Two
reasons are that such methods (a) tend to scale linearly with
the number of moving objects and input videos because a
separate model is built for each and (b) tend to require su-
pervision via 3D bounding boxes and panoptic labels, ob-
tained manually or via category-specific models. As a step
towards truly open-world reconstructions of dynamic cities,
we introduce two key innovations: (a) we factorize the scene
into three separate hash table data structures to efficiently
encode static, dynamic, and far-field radiance fields, and
(b) we make use of unlabeled target signals consisting of
RGB images, sparse LiDAR, off-the-shelf self-supervised
2D descriptors, and most importantly, 2D optical flow. Op-
erationalizing such inputs via photometric, geometric, and
feature-metric reconstruction losses enables SUDS to de-
compose dynamic scenes into the static background, indi-
vidual objects, and their motions. When combined with our
multi-branch table representation, such reconstructions can
be scaled to tens of thousands of objects across 1.2 million
frames from 1700 videos spanning geospatial footprints of
hundreds of kilometers, (to our knowledge) the largest dy-
namic NeRF built to date. We present qualitative initial re-
sults on a variety of tasks enabled by our representations,
including novel-view synthesis of dynamic urban scenes,
unsupervised 3D instance segmentation, and unsupervised
3D cuboid detection. To compare to prior work, we also
evaluate on KITTI and Virtual KITTI 2, surpassing state-of-
the-art methods that rely on ground truth 3D bounding box
annotations while being 10x quicker to train.
| 1. Introduction
Scalable geometric reconstructions of cities have trans-
formed our daily lives, with tools such as Google Maps and
Streetview [ 6] becoming fundamental to how we navigate
and interact with our environments. A watershed moment
*Work done as an intern at Argo AI.
Figure 1. SUDS. We scale neural reconstructions to city scale by
dividing the area into multiple cells and training hash table rep-
resentations for each. We show our full city-scale reconstruction
above and the derived representations below. Unlike prior meth-
ods, our approach handles dynamism across multiple videos, dis-
entangling dynamic objects from static background and modeling
shadow effects. We use unlabeled inputs to learn scene flow and
semantic predictions, enabling category- and object-level scene
manipulation.
in the development of such technology was the ability to
scale structure-from-motion (SfM) algorithms to city-scale
footprints [ 4]. Since then, the advent of Neural Radiance
Fields (NeRFs) [ 33] has transformed this domain by allow-
ing for photorealistic interaction with a reconstructed scene
via view synthesis.
Recent works have attempted to scale such represen-
tations to neighborhood-scale reconstructions for virtual
drive-throughs [ 47] and photorealistic fly-throughs [ 52].
However, these maps remain static and frozen in time.
This makes capturing bustling human environments—
complete with moving vehicles, pedestrians, and objects—
impossible, limiting the usefulness of the representation.
Challenges. One possible solution is a dynamic NeRF
that conditions on time or warps a canonical space with a
time-dependent deformation [ 38]. However, reconstruct-
ing dynamic scenes is notoriously challenging because the
problem is inherently under-constrained, particularly when
input data is constrained to limited viewpoints, as is typi-
cal from egocentric video capture [ 20]. One attractive so-
lution is to scale up reconstructions to many videos, per-
haps collected on different days (e.g., by an autonomous ve-
hicle fleet). However, this creates additional challenges in
jointly modeling fixed geometry that holds for all time (such
as buildings), geometry that is locally static but transient
across the videos (such as a parked car), and geometry that
is truly dynamic (such as a moving person).
SUDS. In this paper, we propose SUDS: Scalable Ur-
ban Dynamic Scenes, a 4D representation that targets both
scale and dynamism. Our key insight is twofold: (1) SUDS
makes use of a rich suite of informative but freely avail-
able input signals, such as LiDAR depth measurements and
optical flow. Other dynamic scene representations [ 27,37]
require supervised inputs such as panoptic segmentation la-
bels or bounding boxes, which are difficult to acquire with
high accuracy for our in-the-wild captures. (2) SUDS de-
composes the world into 3 components: a static branch
that models stationary topography that is consistent across
videos, a dynamic branch that handles both transient (e.g.,
parked cars) and truly dynamic objects (e.g., pedestrians),
and an environment map that handles far-field objects and
sky. We model each branch using a multi-resolution hash
table with scene partitioning, allowing SUDS to scale to an
entire city spanning over 100 km2.
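As a loose illustration of how three such branches could be combined along a single ray (a toy sketch: the function signatures, the density-weighted color mix, and the way the environment map is blended are assumptions and do not reproduce SUDS's actual formulation):

```python
import torch

def composite_branches(static_rgb_sigma, dynamic_rgb_sigma, env_rgb, deltas):
    """Toy compositing of static + dynamic radiance fields with a far-field color.
    *_rgb_sigma: (num_samples, 4) = RGB + density along one ray; deltas: (num_samples,)."""
    sigma = static_rgb_sigma[:, 3] + dynamic_rgb_sigma[:, 3]        # summed densities
    # density-weighted mix of the two branch colors at each sample
    w_static = static_rgb_sigma[:, 3:4] / (sigma.unsqueeze(-1) + 1e-8)
    rgb = w_static * static_rgb_sigma[:, :3] + (1 - w_static) * dynamic_rgb_sigma[:, :3]
    alpha = 1.0 - torch.exp(-sigma * deltas)                        # standard volume rendering
    trans = torch.cumprod(torch.cat([torch.ones(1), 1 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * trans
    color = (weights.unsqueeze(-1) * rgb).sum(dim=0)
    return color + trans[-1] * (1 - alpha[-1]) * env_rgb            # leftover light hits the sky

# usage with random per-sample predictions standing in for the two fields
n = 64
out = composite_branches(torch.rand(n, 4), torch.rand(n, 4), torch.rand(3), torch.full((n,), 0.1))
```

SUDS additionally backs each branch with its own multi-resolution hash table and handles time-varying effects in the dynamic branch, none of which is modeled in this sketch.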
Contributions. We make the following contributions:
(1) to our knowledge, we build the first large-scale dynamic
NeRF, (2) we introduce a scalable three-branch hash table
representation for 4D reconstruction, (3) we present state-
of-the-art reconstruction on 3 different datasets. Finally,
(4) we showcase a variety of downstream tasks enabled by
our representation, including free-viewpoint synthesis, 3D
scene flow estimation, and even unsupervised instance seg-
mentation and 3D cuboid detection.
|
Tastan_CaPriDe_Learning_Confidential_and_Private_Decentralized_Learning_Based_on_Encryption-Friendly_CVPR_2023 | Abstract
Large volumes of data required to train accurate deep
neural networks (DNNs) are seldom available with any sin-
gle entity. Often, privacy concerns prevent entities from
sharing data with each other or with a third-party learning
service provider. While cross-silo federated learning (FL)
allows collaborative learning of large DNNs without shar-
ing the data itself, most existing cross-silo FL algorithms
have an unacceptable utility-privacy trade-off. In this work,
we propose a framework called Confidential andPrivate
Decentralized (CaPriDe) learning, which optimally lever-
ages the power of fully homomorphic encryption (FHE) to
enable collaborative learning without compromising on the
confidentiality and privacy of data. In CaPriDe learn-
ing, participating entities release their private data in an
encrypted form allowing other participants to perform in-
ference in the encrypted domain. The crux of CaPriDe
learning is mutual knowledge distillation between multi-
ple local models through a novel distillation loss, which is
an approximation of the Kullback-Leibler (KL) divergence
between the local predictions and encrypted inferences of
other participants on the same data that can be computed
in the encrypted domain. Extensive experiments on three
datasets show that CaPriDe learning can improve the ac-
curacy of local models without any central coordination,
provide strong guarantees of data confidentiality and pri-
vacy, and has the ability to handle statistical heterogene-
ity. Constraints on the model architecture (arising from the
need to be FHE-friendly), limited scalability, and compu-
tational complexity of encrypted domain inference are the
main limitations of the proposed approach. The code can
be found at https://github.com/tnurbek/capride-learning.
| 1. Introduction
Rapid strides have been made in machine learning (ML)
(in particular, deep learning) over the past decade. How-
ever, in many important application domains such as health-care and finance, the absence of large, centralized datasets
is a significant obstacle to realizing the full benefits of deep
learning algorithms. Data in these applications often resides
in silos and is governed by strict regulations (e.g., HIPAA,
GDPR, etc.) because of its privacy sensitive nature [22].
Competing business interests of data owners and the lack
of appropriate incentives for data sharing further accentuate
the problem. To overcome these issues, there is a need for
collaborative learning algorithms that ideally satisfy the fol-
lowing requirements [15]: (i) confidentiality - no sharing of
raw data, (ii) privacy - minimal leakage of information via
the knowledge exchange mechanism (e.g., gradients or pre-
dictions), (iii) utility - gain in accuracy (over the individual
models) resulting from the collaboration, even in the pres-
ence of statistical heterogeneity, (iv) efficiency - minimize
computational complexity and communication burden, (v)
robustness - handle unintentional failures and attacks em-
anating from malicious entities, and (vi) fairness - utility
should be proportional to the individual contributions.
Federated learning (FL) [15] is a special case of collab-
orative learning, which works under the orchestration of a
central server. FL allows multiple entities to collaboratively
solve a ML problem by sharing of focused updates (e.g.,
gradients), instead of raw data. Specifically, cross-silo FL
(typically between 2-100 participants) has been touted as a
promising solution to address the data fragmentation prob-
lem in finance [8] and healthcare [16,23]. Most prior cross-
silo FL algorithms assume that all the parties are collec-
tively training a single model with a common architecture ,
which is too restrictive in practice. Furthermore, knowl-
edge exchange usually happens through sharing of gradi-
ents or model parameters . Recent gradient inversion at-
tacks [9, 14] demonstrate that it is indeed possible to re-
cover high fidelity information from individual gradient up-
dates, thus violating the privacy requirement. While dif-
ferential privacy (DP) [24], secure multi-party computation
(MPC) [26], and trusted execution environment (TEE) [21]
have been proposed as potential remedies to safeguard pri-
vacy in FL, none of the existing solutions offer an accept-
able privacy-utility trade-off with reasonable efficiency. As
noted in [5], sharing of high dimensional gradients/model
parameters is a fundamental privacy limitation of standard
FL methods, which cannot be addressed by simply wrap-
ping around a DP, MPC, or TEE solution.
Confidential and private collaborative learning (CaPC)
[7] is the only method that claims to achieve both confi-
dentiality and privacy in a collaborative setting. In CaPC
learning, each participant is able to query other parties via
a private inference protocol based on 2-party computation
(2PC). However, the answering parties cannot directly re-
veal the inference logits to the querying party because it
leaks information about their local models (e.g., through
model extraction attacks). To circumvent this problem, dif-
ferential privacy, private aggregation of teacher ensembles,
and secure argmax computation through a central privacy
guardian (PG) are employed to output only the predicted la-
bel to the querying party. A drawback of the CaPC approach
is that it achieves all privacy guarantees using the help of
a semi-trusted PG. Moreover, the use of differential pri-
vacy reduces the performance of FL algorithms, especially
if strong privacy guarantees are required. Finally, the use of
MPC requires multiple rounds of communication between
the parties for each query. All other approaches that have
been proposed to achieve knowledge transfer through distil-
lation [18] require a non-private labeled/unlabeled/synthetic
dataset that can be shared among participants.
In this work, we propose a new protocol called
Confidential andPrivateDecentralized (CaPriDe) learning,
where participants learn from each other in a collaborative
setting while preserving their confidentiality and privacy
(see Figure 1). Unlike the 2PC protocols used in CaPC,
we leverage fully homomorphic encryption (FHE) to en-
able participants to publish their encrypted data. Since the
published data is encrypted using the data owner’s pub-
lic key and only the owner can decrypt the data (using
the secret key), confidentiality is preserved. However, the
CaPriDe framework allows other collaborators to perform
encrypted inference on the published (encrypted) data by
applying their own local model. Mutual knowledge distilla-
tion (KD) [13] between the data owner’s local model and the
local models of the collaborators is used to transfer knowl-
edge between models and ensure a collaborative gain. KD is
typically achieved through minimizing the distillation loss
between the student and teacher responses (logits). Since
the collaborators in CaPriDe learning make predictions on
encrypted data, a loss function that can be computed with-
out any decryption is required. Hence, we propose a new
distillation loss, which is an approximation of the KL di-
vergence that can be securely computed. Only an encrypted
value of the distillation loss aggregated over an entire batch
is sent back to the data owner, which ensures strong privacy.
Data owners can decrypt this aggregate loss value and use
it for updating their local models. Our contributions are:
• We introduce the CaPriDe learning framework, which
exploits FHE-based encrypted inference and knowl-
edge distillation to achieve confidential and private
collaborative learning without any central orchestra-
tion and any need for non-private shared data.
• To enable CaPriDe learning, we propose an
encryption-friendly distillation loss that estimates
theapproximate KL divergence between two model
predictions and design a protocol to securely compute
this loss in the encrypted domain.
• We conduct extensive experiments to show that
CaPriDe learning enables participants to achieve col-
laborative gain, even in the non-iid setting. To prove
feasibility, we implement encrypted inference using
the Tile Tensors library with an FHE-friendly model.
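As a rough plaintext illustration of why an approximate KL divergence of this kind can be FHE-friendly (the exact polynomial used by CaPriDe is not reproduced here; the degree-2 expansion of the logarithm below is purely an assumption), note that replacing the logarithm with a low-degree polynomial leaves only additions and multiplications, which are the operations FHE schemes evaluate natively:

```python
import torch

def approx_log(x):
    """Degree-2 Taylor expansion of log(x) around x = 1: only adds/multiplies,
    hence computable under FHE (an illustrative choice, not CaPriDe's exact polynomial)."""
    return (x - 1.0) - 0.5 * (x - 1.0) ** 2

def approx_kl(p, q):
    """Polynomial stand-in for KL(p || q) = sum_i p_i * (log p_i - log q_i)."""
    return (p * (approx_log(p) - approx_log(q))).sum(dim=-1).mean()

# plaintext comparison of the polynomial surrogate with the exact KL
# (they agree only loosely; this is just to show the computation pattern)
p = torch.softmax(torch.randn(8, 10) * 0.1, dim=-1)
q = torch.softmax(torch.randn(8, 10) * 0.1, dim=-1)
exact = (p * (p.log() - q.log())).sum(dim=-1).mean()
print(float(approx_kl(p, q)), float(exact))
```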
|
Song_Robust_Single_Image_Reflection_Removal_Against_Adversarial_Attacks_CVPR_2023 | Abstract
This paper addresses the problem of robust deep single-
image reflection removal (SIRR) against adversarial at-
tacks. Current deep learning based SIRR methods have
shown significant performance degradation due to unno-
ticeable distortions and perturbations on input images. For
a comprehensive robustness study, we first conduct diverse
adversarial attacks specifically for the SIRR problem, i.e.
towards different attacking targets and regions. Then we
propose a robust SIRR model, which integrates the cross-
scale attention module, the multi-scale fusion module, and
the adversarial image discriminator. By exploiting the
multi-scale mechanism, the model narrows the gap between
features from clean and adversarial images. The image dis-
criminator adaptively distinguishes clean or noisy inputs,
and thus further gains reliable robustness. Extensive ex-
periments on Nature, SIR2, and Real datasets demonstrate
that our model remarkably improves the robustness of SIRR
across disparate scenes.
| 1. Introduction
Single image reflection removal (SIRR) is a classic topic
in the low-level image processing area, namely a kind of
image restoration. When taking an image through a trans-
parent surface, a reflection layer would be blended with the
original photograph (i.e., the transmission layer), resulting
in imaging corruptions. The SIRR is devoted to recovering
a clear transmission image by removing the reflection layer.
However, SIRR is fundamentally ill-posed [42], in that there could be an infinite number of transmission and reflection decompositions of a blended image. Therefore,
traditional methods often exploit manual priors to optimize
the layer separation, such as gradient sparsity prior [25] and
∗Equal contribution
Corresponding Author: <[email protected] >.
Figure 1. The PSNR measurements of our approach under different kinds of adversarial attacks. 'Clean' indicates no attacks, 'MSE FR' represents attacking the full region with the MSE objective, and so on. The testing results are from the Nature dataset [26].
relative smoothness prior [27]. These priors are often vio-
lated when facing complex scenes. Recently, deep learning
based methods [10, 16, 55] have attracted considerable at-
tention to tackle the SIRR problem. By learning semantic
and contextual features, deep SIRR methods have achieved
much better quality of the recovered images.
However, deep neural networks are often vulnerable to
visually imperceptible adversarial perturbations [14, 29].
The prediction can be totally invalid even with slight and
unnoticeable attacks on inputs. Similarly, such vulnerability
is also an important issue for the deep SIRR problem, and
the robustness of current methods has not been thoroughly
studied. There have been no benchmarks and evaluations
for the robustness of deep SIRR models against intended
attacks. Meanwhile, general defense methods [44] have not
been applied to SIRR models. Accordingly, the robust SIRR
model is still a crucial and desiderate research problem.
In this paper, we first investigate the robustness of deep
SIRR methods. We apply the widely-used and powerful
attack method PGD [29] to generate adversarial samples.
For completeness of the robustness evaluation, we present
various attack modes by referring [3, 48]. Specifically, we
employ different attack objectives i.e. mean squared er-
ror (MSE) and learned perceptual image patch similarity
(LPIPS) [52], as well as different attack regions, i.e. the
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
24688
full image region (FR), the reflection region (RR), and the
non-reflection region (NR). Through a systematic analy-
sis, the most effective attack mode and currently the most
robust SIRR model are identified. Then we conduct ad-
versarial training based on this model to enhance its ro-
bustness. In order to develop a furthermore robust SIRR
model, we borrow the wisdom of multi-scale image pro-
cessing [19] and adversarial discriminating [56] from pre-
vious defense methods. Consequently, we build a new
robust SIRR model based on the image transformer [41],
which integrates the cross-scale attention module, the multi-
scale fusion module, and the adversarial image discrimi-
nator. The proposed method obtains significant improve-
ments in robustness. Fig. 1 reveals the PSNR changes of
our model prediction under distinct attack modes on the Na-
ture dataset [26]. It is notable that our model yields limited
degradation on perturbed images when compared with clean input images.
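For reference, a minimal sketch of a region-restricted PGD attack on an SIRR model is shown below. The `model(blended) -> transmission` interface, the L_inf budget, and the binary `region_mask` are assumptions for illustration (the LPIPS variant would simply swap the loss), and this is not the exact attack configuration used in the paper.

```python
import torch

def pgd_attack_region(model, blended, target, region_mask, eps=8/255, alpha=2/255, steps=10):
    """L_inf PGD restricted to a binary region mask (1 = attackable pixels).
    model: SIRR network mapping a blended image to a predicted transmission layer."""
    adv = blended.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = torch.nn.functional.mse_loss(model(adv), target)   # maximize restoration error
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign() * region_mask          # only perturb masked pixels
            adv = blended + (adv - blended).clamp(-eps, eps)       # project back to the eps-ball
            adv = adv.clamp(0, 1)
    return adv.detach()
```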
Overall, our main contributions can be summarized be-
low. (1) We present a comprehensive evaluation of existing
deep SIRR methods in terms of their robustness against var-
ious adversarial attacks on diverse datasets. Extensive ex-
perimental studies identify the presently most effective attack and the most robust SIRR model. (2) We propose a
novel transformer-based SIRR model, which integrates sev-
eral relatively-robust modules to defend against adversarial
attacks. The model can mitigate the effects of perturbations
and distinguish clean or polluted inputs as well. (3) We
carry out sufficient experiments to analyze the robustness
of the proposed method, which achieves state-of-the-art sta-
bility against adversarial images. The model exhibits superior reflection-removal robustness on distorted images while maintaining favorable accuracy on original clean images.
|
Sun_Indiscernible_Object_Counting_in_Underwater_Scenes_CVPR_2023 | Abstract
Recently, indiscernible scene understanding has at-
tracted a lot of attention in the vision community. We
further advance the frontier of this field by systematically
studying a new challenge named indiscernible object count-
ing (IOC), the goal of which is to count objects that are
blended with respect to their surroundings. Due to a
lack of appropriate IOC datasets, we present a large-scale
dataset IOCfish5K which contains a total of 5,637 high-
resolution images and 659,024 annotated center points.
Our dataset consists of a large number of indiscernible ob-
jects (mainly fish) in underwater scenes, making the an-
notation process all the more challenging. IOCfish5K is
superior to existing datasets with indiscernible scenes be-
cause of its larger scale, higher image resolutions, more
annotations, and denser scenes. All these aspects make
it the most challenging dataset for IOC so far, supporting
progress in this area. For benchmarking purposes, we se-
lect 14 mainstream methods for object counting and care-
fully evaluate them on IOCfish5K. Furthermore, we propose
IOCFormer , a new strong baseline that combines density
and regression branches in a unified framework and can
effectively tackle object counting under concealed scenes.
Experiments show that IOCFormer achieves state-of-the-
art scores on IOCfish5K. The resources are available at
github.com/GuoleiSun/Indiscernible-Object-Counting.
| 1. Introduction
Object counting – to estimate the number of object in-
stances in an image – has always been an essential topic in
computer vision. Understanding the counts of each cate-
gory in a scene can be of vital importance for an intelligent
agent to navigate in its environment. The task can be the end
goal or can be an auxiliary step. As to the latter, counting
objects has been proven to help instance segmentation [14],
action localization [54], and pedestrian detection [83]. As to
the former, it is a core algorithm in surveillance [78], crowd
*Corresponding author ([email protected])
Figure 1. Illustration of different counting tasks. Top left: Generic Object Counting (GOC), which counts objects of various classes in natural scenes. Top right: Dense Object Counting (DOC), which counts objects of a foreground class in scenes packed with instances. Down: Indiscernible Object Counting (IOC), which counts objects of a foreground class in indiscernible scenes. Can you find all fishes in the given examples? For GOC, DOC, and IOC, the images shown are from PASCAL VOC [18], ShanghaiTech [91], and the new IOCfish5K dataset, respectively.
monitoring [6], wildlife conservation [56], diet patterns un-
derstanding [55] and cell population analysis [1].
Previous object counting research mainly followed two
directions: generic/common object counting (GOC) [8, 14,
32, 68] and dense object counting (DOC) [28, 36, 50, 57,
64, 67, 91]. The difference between these two sub-tasks
lies in the studied scenes, as shown in Fig. 1. GOC tack-
les the problem of counting object(s) of various categories
in natural/common scenes [8], i.e., images from PASCAL
VOC [18] and COCO [41]. The number of objects to be es-
timated is usually small, i.e., less than 10. DOC, on the
other hand, mainly counts objects of a foreground class
in crowded scenes. The estimated count can be hundreds
Dataset | Year | Indiscernible Scene | #Ann. IMG | Avg. Resolution | Free View | Total | Min | Ave | Max | Web
UCSD [6] | 2008 | ✗ | 2,000 | 158×238 | ✗ | 49,885 | 11 | 25 | 46 | Link
Mall [10] | 2012 | ✗ | 2,000 | 480×640 | ✗ | 62,325 | 13 | 31 | 53 | Link
UCF CC50 [27] | 2013 | ✗ | 50 | 2101×2888 | ✓ | 63,974 | 94 | 1,279 | 4,543 | Link
WorldExpo'10 [90] | 2016 | ✗ | 3,980 | 576×720 | ✗ | 199,923 | 1 | 50 | 253 | Link
ShanghaiTech B [91] | 2016 | ✗ | 716 | 768×1024 | ✗ | 88,488 | 9 | 123 | 578 | Link
ShanghaiTech A [91] | 2016 | ✗ | 482 | 589×868 | ✓ | 241,677 | 33 | 501 | 3,139 | Link
UCF-QNRF [28] | 2018 | ✗ | 1,535 | 2013×2902 | ✓ | 1,251,642 | 49 | 815 | 12,865 | Link
Crowd surv [87] | 2019 | ✗ | 13,945 | 840×1342 | ✗ | 386,513 | 2 | 35 | 1420 | Link
GCC (synthetic) [80] | 2019 | ✗ | 15,212 | 1080×1920 | ✗ | 7,625,843 | 0 | 501 | 3,995 | Link
JHU-CROWD++ [65] | 2019 | ✗ | 4,372 | 910×1430 | ✓ | 1,515,005 | 0 | 346 | 25,791 | Link
NWPU-Crowd [79] | 2020 | ✗ | 5,109 | 2191×3209 | ✓ | 2,133,375 | 0 | 418 | 20,033 | Link
NC4K [51] | 2021 | ✓ | 4,121 | 530×709 | ✓ | 4,584 | 1 | 1 | 8 | Link
CAMO++ [33] | 2021 | ✓ | 5,500 | N/A | ✓ | 32,756 | N/A | 6 | N/A | Link
COD [19] | 2022 | ✓ | 5,066 | 737×964 | ✓ | 5,899 | 1 | 1 | 8 | Link
IOCfish5K (Ours) | 2023 | ✓ | 5,637 | 1080×1920 | ✓ | 659,024 | 0 | 117 | 2,371 | Link
Table 1. Statistics of existing datasets for dense object counting (DOC) and indiscernible object counting (IOC). Total, Min, Ave, and Max are the count statistics.
or even tens of thousands. The counted objects are often
persons (crowd counting) [36, 39, 88], vehicles [26, 57] or
plants [50]. Thanks to large-scale datasets [10,18,28,65,79,
91] and deep convolutional neural networks (CNNs) trained
on them, significant progress has been made both for GOC
and DOC. However, to the best of our knowledge, there is
no previous work on counting indiscernible objects.
Under indiscernible scenes, foreground objects have a
similar appearance, color, or texture to the background and
are thus difficult to detect with a traditional visual
system. The phenomenon exists in both natural and arti-
ficial scenes [20, 33]. Hence, scene understanding for in-
discernible scenes has attracted increasing attention since
the appearance of some pioneering works [20, 34]. Various
tasks have been proposed and formalized: camouflaged ob-
ject detection (COD) [20], camouflaged instance segmen-
tation (CIS) [33] and video camouflaged object detection
(VCOD) [12, 31]. However, no previous research has fo-
cused on counting objects in indiscernible scenes, which is
an important aspect.
In this paper, we study the new indiscernible object
counting (IOC ) task, which focuses on counting foreground
objects in indiscernible scenes. Fig. 1 illustrates this chal-
lenge. Tasks such as image classification [17, 24], seman-
tic segmentation [11, 42] and instance segmentation [3, 23]
all owe their progress to the availability of large-scale
datasets [16, 18, 41]. Similarly, a high-quality dataset for
IOC would facilitate its advancement. Although existing
datasets [20, 33, 51] with instance-level annotations can be
used for IOC, they have the following limitations: 1) the
total number of annotated objects in these datasets is lim-
ited, and image resolutions are low; 2) they only contain
scenes/images with a small instance count; 3) the instance-
level mask annotations can be converted to point supervi-
sion by computing the centers of mass, but the computed
points do not necessarily fall inside the objects.
To facilitate the research on IOC, we construct a large-
scale dataset, IOCfish5K. We collect 5,637 images withindiscernible scenes and annotate them with 659,024 cen-
ter points. Compared with the existing datasets, the pro-
posed IOCfish5K has several advantages: 1) it is the largest-
scale dataset for IOC in terms of the number of images,
image resolution, and total object count; 2) the images in
IOCfish5K are carefully selected and contain diverse indis-
cernible scenes; 3) the point annotations are accurate and
located at the center of each object. Our dataset is com-
pared with existing DOC and IOC datasets in Table 1, and
example images are shown in Fig. 2.
Based on the proposed IOCfish5K dataset, we provide a
systematic study on 14 mainstream baselines [32,36,39,40,
45, 47, 52, 66, 73, 76, 89, 91]. We find that methods which
perform well on existing DOC datasets do not necessarily
preserve their competitiveness on our challenging dataset.
Hence, we propose a simple and effective approach named
IOCFormer. Specifically, we combine the advantages of
density-based [76] and regression-based [39] counting ap-
proaches. The former can estimate the object density
across the image, while the latter directly regresses the co-
ordinates of points, which is straightforward and elegant.
IOCFormer contains two branches: density and regression.
The density-aware features from the density branch help
make indiscernible objects stand out through the proposed
density-enhanced transformer encoder (DETE). Then the
refined features are passed through a conventional trans-
former decoder, after which predicted object points are gen-
erated. Experiments show that IOCFormer outperforms all
considered algorithms, demonstrating its effectiveness on
IOC. To summarize, our contributions are three-fold.
• We propose the new indiscernible object counting
(IOC) task. To facilitate research on IOC, we con-
tribute a large-scale dataset IOCfish5K, containing
5,637 images and 659,024 accurate point labels.
• We select 14 classical and high-performing approaches
for object counting and evaluate them on the proposed
IOCfish5K for benchmarking purposes.
• We propose a novel baseline, namely IOCFormer,
which integrates density-based and regression-based
methods in a unified framework. In addition, a novel
density-based transformer encoder is proposed to grad-
ually exploit density information from the density
branch to help detect indiscernible objects.
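As a schematic caricature of the density-to-regression fusion idea described above (the layer sizes, the 1×1 convolutional density head, and the simple additive fusion are all assumptions, not the actual DETE design):

```python
import torch
import torch.nn as nn

class ToyDensityEnhancedEncoder(nn.Module):
    """Schematic fusion of a density branch into transformer encoder features."""
    def __init__(self, dim=256):
        super().__init__()
        self.density_head = nn.Conv2d(dim, 1, kernel_size=1)      # predicts a density map
        self.density_proj = nn.Conv2d(1, dim, kernel_size=1)      # lifts density back to features
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, feat):                                      # feat: (B, dim, H, W)
        density = self.density_head(feat).relu()                  # (B, 1, H, W)
        fused = feat + self.density_proj(density)                 # density-aware features
        tokens = fused.flatten(2).transpose(1, 2)                 # (B, H*W, dim)
        return self.encoder(tokens), density                      # refined tokens + density map

enc = ToyDensityEnhancedEncoder()
tokens, density = enc(torch.randn(2, 256, 32, 32))
```

In IOCFormer, the refined, density-aware tokens would then feed a transformer decoder that outputs the predicted object points.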
|
Tian_Trainable_Projected_Gradient_Method_for_Robust_Fine-Tuning_CVPR_2023 | Abstract
Recent studies on transfer learning have shown that se-
lectively fine-tuning a subset of layers or customizing dif-
ferent learning rates for each layer can greatly improve ro-
bustness to out-of-distribution (OOD) data and retain gen-
eralization capability in the pre-trained models. However,
most of these methods employ manually crafted heuristics
or expensive hyper-parameter searches, which prevent them
from scaling up to large datasets and neural networks. To
solve this problem, we propose Trainable Projected Gradi-
ent Method (TPGM) to automatically learn the constraint
imposed for each layer for a fine-grained fine-tuning reg-
ularization. This is motivated by formulating fine-tuning
as a bi-level constrained optimization problem. Specifi-
cally, TPGM maintains a set of projection radii, i.e., dis-
tance constraints between the fine-tuned model and the pre-
trained model, for each layer, and enforces them through
weight projections. To learn the constraints, we propose
a bi-level optimization to automatically learn the best set
of projection radii in an end-to-end manner. Theoretically,
we show that the bi-level optimization formulation is the
key to learning different constraints for each layer. Em-
pirically, with little hyper-parameter search cost, TPGM
outperforms existing fine-tuning methods in OOD perfor-
mance while matching the best in-distribution (ID) per-
formance. For example, when fine-tuned on DomainNet-
Real and ImageNet, compared to vanilla fine-tuning, TPGM
shows 22% and 10% relative OOD improvement respec-
tively on their sketch counterparts. Code is available at
https://github.com/PotatoTian/TPGM .
| 1. Introduction
Improving out-of-distribution (OOD) robustness such
that a vision model can be trusted reliably across a variety of
conditions beyond the in-distribution (ID) training data has
been a central research topic in deep learning. For example,
*Work partially done during internship at Meta.Pre-trained !!Fine-tuned !"Projected !"#(!)#(%&')#(%)…
Figure 1. Illustration of TPGM. TPGM learns different weight
projection radii for each layer between a fine-tuned model θt
and a pre-trained model θ0, and enforces the constraints through
projection to obtain a projected model θ̃.
domain adaptation [ 39,46], domain generalization [ 25,50],
and out-of-distribution calibration [ 33] are examples of re-
lated fields. More recently, large pre-trained models, such
as CLIP [ 28] (pre-trained on 400M image-text pairs), have
demonstrated large gains in OOD robustness, thanks to the
ever-increasing amount of pre-training data as well as ef-
fective architectures and optimization methods. However,
fine-tuning such models to other tasks generally results in
worse OOD generalization as the model over-fits to the new
data and forgets the pre-trained features [28]. A natural goal
is to preserve the generalization capability acquired by the
pre-trained model when fine-tuning it to a downstream task.
A recent empirical study shows that aggressive fine-
tuning strategies such as using a large learning rate can de-
crease OOD robustness [ 41]. We hypothesize that the for-
getting of the generalization capability of the pre-trained
model in the course of fine-tuning is due to unconstrained
optimization on the new training data [ 44]. This conjec-
ture is not surprising, because several prior works, even
though they did not focus on OOD robustness, have dis-
covered that encouraging a close distance to the pre-trained
model weights can improve ID generalization, i.e., avoid-
ing over-fitting to the training data [ 9,44]. Similarly, if suit-
able distance constraints are enforced, we expect the model
to behave more like the pre-trained model and thus retain
more of its generalization capability. The question is where
to enforce distance constraints and how to optimize them?
Several works have demonstrated the importance of
treating each layer differently during fine-tuning. For ex-
ample, a new work [ 22] discovers that selectively fine-
tuning a subset of layers can lead to improved robustness
to distribution shift. Another work [ 31] shows that opti-
mizing a different learning rate for each layer is benefi-
cial for few-shot learning. Therefore, we propose to en-
force a different constraint for each layer. However, ex-
isting works either use manually crafted heuristics or ex-
pensive hyper-parameter search, which prevent them from
scaling up to large datasets and neural networks. For exam-
ple, the prior work [ 31] using evolutionary search for hyper-
parameters can only scale up to a custom 6-layer ConvNet
and a ResNet-12 for few-shot learning. The computation
and time for searching hyper-parameters become increas-
ingly infeasible for larger datasets, let alone scaling up the
combinatorial search space to all layers. For example, a
ViT-base [ 37] model has 154 trainable parameter groups in-
cluding both weights, biases, and embeddings¹. This leads to a search space with more than 10⁴⁵ combinations even if
we allow only two choices per constraint parameter, which
makes the search prohibitively expensive.
To solve this problem, we propose
a trainable projected gradient method (TPGM) to support
layer-wise regularization optimization. Specifically, TPGM
adopts trainable weight projection constraints , which we
refer to as projection radii , and incorporates them in the
forward pass of the main model to optimize. Intuitively, as
shown in Fig. 1, TPGM maintains a set of weight projection
radii, i.e., the distances between the pre-trained model
(θ0) and the current fine-tuned model (θt), for each layer
of a neural network and updates them. The projection
radii control how much ”freedom” each layer has to grow.
For example, if the model weights increase outside of the
norm ball defined by the radii and the norm ∥·∥, the projection operator
will project them back to be within the constraints. To
learn the weight projection radii in a principled manner, we
propose to use alternating optimization between the model
weights and the projection radii, motivated by formulating
fine-tuning as a bi-level constrained problem (Sec. 3.1). We
theoretically show that the bi-level formulation is the key
to learning constraints for each layer (Sec. 3.4).
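To make the projection step concrete, below is a minimal sketch of a per-layer weight projection onto a ball of learnable radius around the pre-trained weights; the choice of the L2 norm, the parameter names, and the way the radii are exposed are illustrative assumptions rather than the exact TPGM implementation.

```python
import torch

def project_layer(theta_t, theta_0, radius):
    """Project fine-tuned weights theta_t back into an L2 ball of the given
    radius centered at the pre-trained weights theta_0 (one parameter group)."""
    diff = theta_t - theta_0
    norm = diff.norm(p=2)
    scale = torch.clamp(radius / (norm + 1e-12), max=1.0)   # shrink only if outside the ball
    return theta_0 + scale * diff

def project_model(finetuned, pretrained, radii):
    """Apply per-layer projections; radii is a dict of learnable scalars (one per group)."""
    projected = {}
    for name, theta_t in finetuned.items():
        projected[name] = project_layer(theta_t, pretrained[name], radii[name].abs())
    return projected

# usage: the radii would be optimized in the alternating (bi-level) loop described above
finetuned = {"fc.weight": torch.randn(10, 5)}
pretrained = {"fc.weight": torch.randn(10, 5)}
radii = {"fc.weight": torch.nn.Parameter(torch.tensor(0.5))}
projected = project_model(finetuned, pretrained, radii)
```

In TPGM, such projected weights are used in the forward pass of the main model while the radii themselves are learned, which is what makes the per-layer constraints trainable.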
Empirically, we conduct thorough experiments on large-
scale datasets, DomainNet [ 27] and ImageNet [ 4], using dif-
ferent architectures. Under the premise of preserving ID
performance, i.e., OOD robustness should not come at the
expense of worse ID accuracy, TPGM outperforms exist-
ing approaches with little effort for hyper-parameter tun-
ing. Further analysis of the learned projection radii reveals
¹For example, for a linear layer y = Wx + b, we need to use separate distance constraints for W and b.
that lower layers (layers closer to the input) in a network
require stronger regularization while higher layers (layers
closer to the output) need more flexibility. This observation
is in line with the common belief that lower layers learn
more general features while higher layers specialize to each
dataset [ 26,29,41,45]. Therefore, when conducting trans-
fer learning such as fine-tuning, we need to treat each layer
differently. Our contributions are summarized below.
•We propose a trainable projected gradient method
(TPGM) for fine-tuning to automatically learn the
distance constraints for each layer in a neural network
during fine-tuning.
•We conduct experiments on different datasets and
architectures to show significantly improved OOD
generalization while matching ID performance.
•We theoretically study TPGM using linear models to
show that bi-level optimization is the key to learning
different constraints for each layer.
|
Sun_Learning_Semantic-Aware_Disentangled_Representation_for_Flexible_3D_Human_Body_Editing_CVPR_2023 | Abstract
3D human body representation learning has received in-
creasing attention in recent years. However, existing works
cannot flexibly, controllably and accurately represent hu-
man bodies, limited by coarse semantics and unsatisfac-
tory representation capability, particularly in the absence
of supervised data. In this paper, we propose a human
body representation with fine-grained semantics and high
reconstruction-accuracy in an unsupervised setting. Specif-
ically, we establish a correspondence between latent vectors
and geometric measures of body parts by designing a part-
aware skeleton-separated decoupling strategy, which facili-
tates controllable editing of human bodies by modifying the
corresponding latent codes. With the help of a bone-guided
auto-encoder and an orientation-adaptive weighting strat-
egy, our representation can be trained in an unsupervised
manner. With the geometrically meaningful latent space, it
can be applied to a wide range of applications, from human
body editing to latent code interpolation and shape style
transfer. Experimental results on public datasets demon-
strate the accurate reconstruction and flexible editing abil-
ities of the proposed method. The code will be available
*Corresponding authorathttp://cic.tju.edu.cn/faculty/likun/
projects/SemanticHuman .
| 1. Introduction
Learning low-dimensional representations of human
bodies plays an important role in various applications in-
cluding human body reconstruction [4, 19, 32, 37], gener-
ation [7, 30, 31] and editing [35, 36, 39]. Existing meth-
ods [2, 18, 22, 25, 29] are either too semantically coarse
to enable personalized human body editing, or suffer from
poor reconstruction performance due to limited representa-
tion capability. This paper aims to develop a fine-grained
semantic-aware human body representation with flexible
representation ability.
Since human bodies are rich in variations of poses and
shapes, traditional linear models [1, 25, 29, 35, 36] can-
not handle complex nonlinear structures of human body
meshes accurately. Therefore, parametric models have been
proposed for better representation. The landmark works
SCAPE [2] and SMPL [22] represent human bodies by the
shape and pose parameters. However, the semantics of their
shape parameters are not sufficiently precise, making it im-
possible to flexibly edit the body shape. Furthermore, the
representation ability of these methods is limited by the lin-
ear shape space of human body shapes, and hence their re-
construction accuracy is often unsatisfactory.
With the success of deep learning, the encoder-decoder
architecture has demonstrated excellent representation ca-
pability [7,10,13,14,26]. Such methods improve the recon-
struction precision by constructing different convolution-
like operators for feature extraction on irregular meshes.
However, these works lack disentangled representation and
fail to obtain promising results for geometrically complex
human body parts. Several works [3,9,11,18,38] pursue the
disentanglement of latent representations, i.e., each latent
code has clear semantics. But these methods either require
paired supervised data or have poor performance on the re-
construction, which significantly affects their generalization
and robustness. In addition, the semantics of the above rep-
resentations are coarse, which only enables person-level at-
tribute transfer and cannot be applied to part-level flexible
editing.
In this paper, we aim to build a human body represen-
tation with fine-grained semantics and high reconstruction-
accuracy in an unsupervised setting, which needs to over-
come two main challenges. First, how to disentangle the
human body to reconstruct precise semantics is a key but
difficult problem. Although it is straightforward to decom-
pose a human body into articulated parts for part-level edit-
ing, the hidden space of each part is still coupled. Secondly,
providing paired supervised data requires a lot of manual
effort, and it is very challenging to make the representation
disentangled without sacrificing reconstruction accuracy in
an unsupervised manner.
To address these challenges, we propose SemanticHu-
man, an editable human body representation with fine-
grained semantics and high reconstruction-precision, which
facilitates controllable human body editing without paired
supervised data. To reconstruct fine-grained semantics, we
design a part-aware skeleton-separated decoupling strategy
with anatomical priors of the human body. Specifically,
we disentangle body part variations into bone-related vari-
ations ( e.g., length and orientation variations) and bone-
independent variations ( e.g., circumference variations). In
contrast to the previous pose and shape disentanglement
on the entire person [2, 18, 22], this part-aware skeleton-
separated decoupling strategy establishes a correspondence
between latent vectors and geometric properties of body
parts, which benefits part-level controllable editing.
To ensure high reconstruction accuracy and fine-grained
semantics of the representation by unsupervised learn-
ing, we propose a bone-guided autoencoder architecture
and an orientation-adaptive geometry-preserving loss. The
bone-guided auto-encoder fuses the geometric features of
body parts with their joint information to achieve accu-
rate and efficient modeling of human bodies. Besides,an orientation-adaptive weighting strategy is introduced to
compute the geometry-preserving loss, which can provide
effective geometric regularization for unsupervised disen-
tanglement and part-level editing. Experimental results
on two public datasets with different mesh connectivities
demonstrate the high reconstruction-precision and control-
lable editing capability of the proposed method. An ex-
ample is given in Fig. 1. The code will be available
at http://cic.tju.edu.cn/faculty/likun/projects/SemanticHuman.
Our main contributions are summarized as follows:
• We propose a semantic-aware and editable human
body representation with fine-grained representation
ability. The latent space of our approach facilitates per-
sonalized editing of human bodies by modifying their
latent vectors.
• We propose a part-aware skeleton-separated decou-
pling strategy exploiting structural priors of the human
body to learn geometrically meaningful latent codes
with fine-grained semantics.
• We propose a bone-guided auto-encoder architecture
and an orientation-adaptive geometry-preserving loss
to ensure the robust and effective disentanglement of
the representation learned without supervision.
|
Tomar_TeSLA_Test-Time_Self-Learning_With_Automatic_Adversarial_Augmentation_CVPR_2023 | Abstract
Most recent test-time adaptation methods focus on only
classification tasks, use specialized network architectures,
destroy model calibration or rely on lightweight information
from the source domain. To tackle these issues, this paper
proposes a novel Test-time Self-Learning method with auto-
matic Adversarial augmentation dubbed TeSLA for adapt-
ing a pre-trained source model to the unlabeled streaming
test data. In contrast to conventional self-learning meth-
ods based on cross-entropy, we introduce a new test-time loss function through an implicit, tight connection with the mutual information and online knowledge distillation.
Furthermore, we propose a learnable efficient adversarial
augmentation module that further enhances online knowl-
edge distillation by simulating high entropy augmented im-
ages. Our method achieves state-of-the-art classification
and segmentation results on several benchmarks and types
of domain shifts, particularly on challenging measurement
shifts of medical images. TeSLA also benefits from several de-
sirable properties compared to competing methods in terms
of calibration, uncertainty metrics, insensitivity to model
architectures, and source training strategies, all supported
by extensive ablations. Our code and models are available
at https://github.com/devavratTomar/TeSLA.
| 1. Introduction
Deep neural networks (DNNs) perform exceptionally
well when the training (source) and test (target) data follow the same distribution. However, distribution shifts are inevitable in real-world settings and pose a major challenge to the performance of deep networks after deployment. Moreover, access to the labeled training data may be infeasible at test time due to privacy concerns or limited transmission bandwidth. In such scenarios, source-free domain adaptation (SFDA) [1, 23, 25] and test-time adaptation (TTA) methods [19, 28, 39] aim to adapt the pre-trained source model to the unlabeled, distributionally shifted target domain while relaxing the requirement of access to the source data. While SFDA methods have access to the full target data over multiple training epochs
(offline setup), TTA methods usually process test images in an online streaming fashion and therefore represent a more realistic adaptation scenario. However, most of these methods (i) are applied only to classification tasks, (ii) are evaluated only on non-real-world domain shifts, e.g., excluding measurement shifts, (iii) destroy model calibration by minimizing entropy, which yields overconfident predictions [45] on incorrectly classified samples, and (iv) use specialized network architectures or rely on feature statistics of the source dataset [28].
Figure 1. Knowledge distillation with adversarial augmentations. (a) Easy images with confident soft pseudo-labels and (b) Hard images with unconfident soft pseudo-labels are adversarially augmented and pushed toward the uncertainty region (high entropy) near the decision boundary. The model is updated for (a) to match its output on the augmented views with that on the non-augmented views of Easy test images (KL divergence L_kd ≠ 0), whereas the update is effectively discarded for (b), since L_kd ≈ 0 between Hard images and their augmented views.
We address these issues by proposing a new test-time adaptation method with automatic adversarial augmentation, called TeSLA, for which we further define realistic TTA protocols. Self-learning methods often supervise the model adaptation on the unlabeled test images using their predicted pseudo-labels. Since the model can easily overfit to its own pseudo-labels, a weight-averaged teacher model (slowly updated by the student model) is employed to obtain the pseudo-labels [41, 47]. The student model is then trained with a cross-entropy (CE) loss between the one-hot pseudo-labels and its predictions on the test images. In this paper, we instead propose to minimize the flipped cross-entropy between the student model's predictions and the soft pseudo-labels (note the reversed order), together with the negative entropy of its predictions marginalized over the test images. In Sec. 3, we show that the proposed formulation is equivalent to mutual information maximization, implicitly corrected
by the teacher-student knowledge distillation via pseudo-labels, yielding performance improvements across various test-time adaptation protocols compared to the basic CE optimization (cf. ablation in Fig. 5-(1)).
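As a rough, non-authoritative illustration of this objective, the sketch below computes a flipped cross-entropy between the student's softmax outputs and the teacher's soft pseudo-labels, adds the negative entropy of the student's batch-marginal prediction, and updates the weight-averaged teacher with a standard exponential moving average. The exact formulation, scaling, and momentum value in Sec. 3 of the paper may differ, and all names here are placeholders:

```python
# Hedged sketch of a flipped cross-entropy test-time loss plus an EMA teacher update.
# The precise objective in the paper may differ; this is only an illustration.
import torch
import torch.nn.functional as F


def tesla_style_loss(student_logits: torch.Tensor,   # (B, C) student outputs on a test batch
                     teacher_logits: torch.Tensor,   # (B, C) teacher outputs (soft pseudo-labels)
                     eps: float = 1e-8) -> torch.Tensor:
    p_student = F.softmax(student_logits, dim=-1)
    q_teacher = F.softmax(teacher_logits, dim=-1).detach()   # soft pseudo-labels, no gradient
    # Flipped cross-entropy: -sum_c p_student * log q_teacher (note the reversed roles).
    flipped_ce = -(p_student * torch.log(q_teacher + eps)).sum(dim=-1).mean()
    # Negative entropy of the batch-marginal student prediction (encourages class diversity).
    p_marginal = p_student.mean(dim=0)
    neg_marginal_entropy = (p_marginal * torch.log(p_marginal + eps)).sum()
    return flipped_ce + neg_marginal_entropy


@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float = 0.999):
    """Weight-averaged teacher: slowly track the student's parameters."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)
```

In an online setting the student would be optimized with this loss on each incoming test batch, with the EMA update applied after every student step.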
Motivated by teacher-student knowledge distillation, an-
other central tenet of our method is to assist the student
model during adaptation in improving its performance on
the hard-to-classify (high entropy) test images. For this
purpose, we propose learning automatic adversarial aug-
mentations (see Fig. 1) as a proxy for simulating images
in the uncertainty region of the feature space. The model is
then updated to ensure consistency between the predictions on high-entropy augmented images and the soft pseudo-labels from the respective non-augmented versions. Consequently, the model is self-distilled on Easy test images with confident soft pseudo-labels (Fig. 1a). In contrast, the model update on Hard test images is discarded (Fig. 1b), resulting in better class-wise feature separation.
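The consistency term suggested by Fig. 1 can be pictured with the hedged sketch below: the teacher produces a soft pseudo-label on the clean view, the student is evaluated on an adversarially augmented view, and the two are matched with a KL divergence. The `augment` callable is a placeholder for the learned adversarial augmentation module, which is not detailed here, and the Easy/Hard behavior is implicit in the magnitude of the KL term rather than an explicit gate:

```python
# Minimal sketch of the KL-based teacher-student consistency term (illustrative only).
import torch
import torch.nn.functional as F


def kd_consistency_loss(student: torch.nn.Module,
                        teacher: torch.nn.Module,
                        x: torch.Tensor,
                        augment) -> torch.Tensor:
    with torch.no_grad():
        q = F.softmax(teacher(x), dim=-1)            # soft pseudo-label on the clean view
    x_aug = augment(x)                               # placeholder for the learned adversarial augmentation
    log_p = F.log_softmax(student(x_aug), dim=-1)    # student prediction on the augmented view
    # KL(q || p) tends to be large when a confident (Easy) pseudo-label disagrees with the
    # prediction on the augmented view, and small when both distributions already sit in the
    # uncertainty region (Hard), so Hard images contribute little to the update.
    return F.kl_div(log_p, q, reduction="batchmean")
```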
In summary, our contributions are: (i) we propose a novel test-time self-learning method based on the flipped cross-entropy (f-CE), derived through its tight connection with the mutual information between the model's predictions and the test images; (ii) we propose an efficient plug-in test-time automatic adversarial augmentation module, used for online knowledge distillation from the teacher to the student network, that consistently improves the performance of test-time adaptation methods, including ours; and (iii) TeSLA achieves new state-of-the-art results on several benchmarks, from common image corruptions to realistic measurement shifts, for both classification and segmentation tasks. Furthermore, TeSLA outperforms existing TTA methods in terms of calibration and uncertainty metrics while making no assumptions about the network architecture or the source domain's information, e.g., feature statistics or training strategy.
|