title (stringlengths 28–135) | abstract (stringlengths 0–12k) | introduction (stringlengths 0–12k) |
---|---|---|
Zhang_Prompt_Generate_Then_Cache_Cascade_of_Foundation_Models_Makes_Strong_CVPR_2023 | Abstract
Visual recognition in low-data regimes requires deep neural
networks to learn generalized representations from limited
training samples. Recently, CLIP-based methods have
shown promising few-shot performance, benefiting from contrastive language-image pre-training. We then ask whether more diverse pre-training knowledge can be cascaded
to further assist few-shot representation learning. In this
paper, we propose CaFo, a Cascade of Foundation models
that incorporates diverse prior knowledge of various pre-
training paradigms for better few-shot learning. Our CaFo
incorporates CLIP’s language-contrastive knowledge,
DINO’s vision-contrastive knowledge, DALL-E’s vision-
generative knowledge, and GPT-3’s language-generative
knowledge. Specifically, CaFo works by ‘Prompt, Generate,
then Cache’. Firstly, we leverage GPT-3 to produce textual
inputs for prompting CLIP with rich downstream linguistic
semantics. Then, we generate synthetic images via DALL-E
to expand the few-shot training data without any manpower.
At last, we introduce a learnable cache model to adaptively
blend the predictions from CLIP and DINO. By such col-
laboration, CaFo can fully unleash the potential of different
pre-training methods and unify them to achieve state-of-the-art performance for few-shot classification. Code is available at https://github.com/ZrrSkywalker/CaFo.
| 1. Introduction
Convolutional neural networks [44] and transform-
ers [68] have attained great success on a wide range of vi-
sion tasks with abundant datasets [15]. In contrast, for data-deficient and resource-limited scenarios, few-shot learning [63, 70] has also become a research hotspot, where the networks are constrained to learn from limited images with annotations.
∗Equal contribution. †Corresponding author
Figure 1. The Cascade Paradigm of CaFo. We adaptively incorporate the knowledge from four types of pre-training methods and achieve a strong few-shot learner.
Many previous works have been proposed
in this field to enhance model’s generalization capability
by meta learning [20, 71], metric learning [74], and data
augmentation [31, 73]. Recently, CLIP [57] pre-trained
by large-scale language-image pairs shows favorable zero-
shot transfer ability for open-vocabulary visual recogni-
tion. The follow-up CoOp [83], CLIP-Adapter [24] and
Tip-Adapter [76] further extend it for few-shot classifica-
tion and achieve superior performance on various down-
stream datasets. This indicates that, even if the few-shot
training data is insufficient, the large-scale pre-training has
endowed the network with strong representation ability,
which highly benefits the few-shot learning on downstream
domains. Now that there exist various self-supervisory
paradigms besides CLIP, could we adaptively integrate their
pre-learned knowledge and collaborate them to be a better
few-shot learner?
To tackle this issue, we propose CaFo, a Cascade of Foundation models blending the knowledge from mul-
tiple pre-training paradigms with a ‘Prompt, Generate,
then Cache’ pipeline. As shown in Figure 1, we in-
tegrate CLIP [57], DINO [7], DALL-E [58], and GPT-
3 [4] to provide four types of prior knowledge for CaFo.
Therein, CLIP [57] is pre-trained to produce paired fea-
tures in the embedding space for every image and its de-
scriptive text. Guided by texts with different categor-
ical semantics, CLIP [57] can well classify the images
aided by language-contrastive knowledge . DINO fol-
lows contrastive self-supervised learning [7] to match the
representations between two transformations of the same image, which makes it expert at distinguishing different images with vision-contrastive knowledge. Similar to CLIP [57],
DALL-E [58] is also pre-trained by image-text pairs but
learns to predict the encoded image tokens based on the
given text tokens. Conditioned on the input text, DALL-
E [58] could leverage the vision-generative knowledge to
create high-quality synthetic images in a zero-shot manner.
Pre-trained on a large-scale language corpus, GPT-3 [4] takes
a few hand-written templates as input, and autoregressively
generates human-like texts, which contain rich language-
generative knowledge . Therefore, the four models have
distinctive pre-training goals and can provide complemen-
tary knowledge to assist the few-shot visual recognition.
In detail, we cascade them in three steps: 1) Prompt.
We adopt GPT-3 [4] to produce textual prompts for CLIP
based on a few hand-written templates. These prompts with
richer language knowledge are fed into CLIP’s textual en-
coder. 2) Generate . We adopt DALL-E [58] to gener-
ate additional training images for different categories based
on the domain-specific texts, which enlarges the few-shot
training data, but costs no extra manpower for collection
and annotation. 3) Cache . We utilize a cache model to
adaptively incorporate the predictions from both CLIP [57]
and DINO [7]. Referring to Tip-Adapter [76], we build
the cache model with two kinds of keys respectively for
the two pre-trained models. Regarding zero-shot CLIP as
the distribution baseline, we adaptively ensemble the pre-
dictions of two cached keys as the final output. By only
fine-tuning the lightweight cache model via expanded train-
ing data, CaFo can learn to fuse diverse prior knowledge
and leverage their complementary characteristics for better
few-shot visual recognition.
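To make the ‘Cache’ step concrete, below is a minimal sketch of a Tip-Adapter-style cache ensemble with two sets of keys, as the paragraph describes; it is an illustration rather than the released CaFo code, and the mixing weights `alpha`, `beta` and sharpness `gamma` are hypothetical hyperparameters (the adaptive, learnable weighting is omitted for brevity).

```python
import torch
import torch.nn.functional as F

def cafo_cache_inference(q_clip, q_dino, keys_clip, keys_dino, values,
                         zeroshot_logits, alpha=1.0, beta=1.0, gamma=5.0):
    """Blend key-based predictions from CLIP and DINO features with
    zero-shot CLIP logits as the distribution baseline.

    q_clip, q_dino       : (B, D) test features from the two visual encoders
    keys_clip, keys_dino : (N, D) cached (learnable) few-shot training features
    values               : (N, C) one-hot labels of the cached samples
    zeroshot_logits      : (B, C) zero-shot CLIP text-image logits
    """
    # Cosine-similarity affinities between test queries and cached keys.
    aff_clip = F.normalize(q_clip, dim=-1) @ F.normalize(keys_clip, dim=-1).t()
    aff_dino = F.normalize(q_dino, dim=-1) @ F.normalize(keys_dino, dim=-1).t()

    # Convert affinities into class predictions through the cached labels.
    pred_clip = torch.exp(-gamma * (1 - aff_clip)) @ values   # (B, C)
    pred_dino = torch.exp(-gamma * (1 - aff_dino)) @ values   # (B, C)

    # Ensemble with zero-shot CLIP as the baseline distribution.
    return zeroshot_logits + alpha * pred_clip + beta * pred_dino
```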
Our main contributions are summarized as follows:
• We propose CaFo to incorporate the prior knowledge
learned from various pre-training paradigms for better
few-shot learning.
• By collaborating CLIP, DINO, GPT-3 and DALL-E,
CaFo utilizes more semantic prompts, enriches the limited few-shot training data, and adaptively ensem-
bles diverse predictions via the cache model.
• We conduct thorough experiments on 11 datasets for
few-shot classification, where CaFo achieves state-of-the-art performance without using extra annotated data.
|
Xu_Unsupervised_Domain_Adaption_With_Pixel-Level_Discriminator_for_Image-Aware_Layout_Generation_CVPR_2023 | Abstract
Layout is essential for graphic design and poster gen-
eration. Recently, applying deep learning models to gen-
erate layouts has attracted increasing attention. This pa-
per focuses on using the GAN-based model conditioned on
image contents to generate advertising poster graphic lay-
outs, which requires an advertising poster layout dataset
with paired product images and graphic layouts. How-
ever, the paired images and layouts in the existing dataset
are collected by inpainting and annotating posters, respec-
tively. There exists a domain gap between inpainted posters
(source domain data) and clean product images (target do-
main data). Therefore, this paper combines unsupervised
domain adaption techniques to design a GAN with a novel
pixel-level discriminator (PD), called PDA-GAN, to gen-
erate graphic layouts according to image contents. The
PD is connected to the shallow level feature map and com-
putes the GAN loss for each input-image pixel. Both quan-
titative and qualitative evaluations demonstrate that PDA-
GAN can achieve state-of-the-art performance and gener-
ate high-quality image-aware graphic layouts for advertis-
ing posters.
| 1. Introduction
Graphic layout is essential to the design of posters, mag-
azines, comics, and webpages. Recently, generative ad-
versarial network (GAN) has been applied to synthesize
graphic layouts through modeling the geometric relation-
ships of different types of 2D elements, for instance, text
and logo bounding boxes [10, 19]. Fine-grained controls
over the layout generation process can be realized using
Conditional GANs, and the conditions might include image
contents and the attributes of graphic elements, e.g. cat-
egory, area, and aspect ratio [20, 35]. In particular, image contents play an important role in generating image-aware graphic layouts of posters and magazines [34, 35].
*Work done during an internship at Alibaba Group
†Corresponding author
Figure 1. Examples of image-conditioned advertising poster graphic layout generation. Our model generates graphic layouts (middle) with multiple elements conditioned on product images (left). The designer or even automatic rendering programs can utilize graphic layouts to render advertising posters (right). (Element legend: Logo, Text, Underlay.)
This paper focuses on studying the deep-model based
image-aware graphic layout method for advertising poster
design, where the graphic layout is defined to be a set of el-
ements with their classes and bounding boxes as in [35]. As
shown in Fig. 1, the graphic layout for advertising poster de-
sign in our work refers to arranging four classes of elements,
such as logos, texts, underlays, and other elements for em-
bellishment, at the appropriate position according to prod-
uct images. Therefore, the core is to model the relationship
between the image contents and layout elements [4,35] such
that the neural network can learn how to produce the aes-
thetic arrangement of elements around the product image.
It can be defined as the direct set prediction problem in [5].
Constructing a high-quality layout dataset for the train-
ing of image-aware graphic layout methods is labor inten-
sive, since it requires professional stylists to design the ar-
rangement of elements to form the paired product image
and layout data items. For the purpose of reducing work-
load, Zhou et al. [35] propose to collect designed poster images to construct a dataset with the required paired data. Specifically,
the graphic elements imposed on the poster image are re-
moved through image inpainting [28], and annotated with
their geometric arrangements in the posters, which results in
state-of-the-art CGL-Dataset with 54,546 paired data items.
While CGL-Dataset is substantially beneficial to the train-
ing of image-aware networks, there exists a domain gap be-
tween product image and its inpainted version. The CGL-
GAN in [35] tries to narrow this domain gap by utilizing
Gaussian blur such that the network can take a clean prod-
uct image as input for synthesizing a high-quality graphic
layout. However, it is possible that the blurred images lose
the delicate color and texture details of products, leading to
unpleasing occlusion or placement of graphic elements.
This paper proposes to leverage unsupervised domain
adaption technique to bridge the domain gap between clean
product images and inpainted images in CGL-Dataset to
significantly improve the quality of generated graphic lay-
outs. Treating the inpainted poster images without graphic
elements as the source domain, our method aims to align the feature space of the source domain with that of clean product images in the target do-
main. To this end, we design a GAN with a pixel-level do-
main adaption discriminator, abbreviated as PDA-GAN, to
achieve more fine-grained control over feature space align-
ment. It is inspired by PatchGAN [13], but is non-trivially adapted to the pixel level in our task. First, the pixel-level dis-
criminator (PD) designed for domain adaption can avoid the
Gaussian blurring step in [35], which is helpful for the net-
work to model the details of the product image. Second, the
pixel-level discriminator is connected to the shallow level
feature map, since the inpainted area is usually small rel-
ative to the whole image and will be difficult to discern at deep levels with large receptive fields. Finally, the PD is constructed from only three convolutional layers, and its number of network parameters is less than 2% of the discriminator
parameters in CGL-GAN. This design reduces the memory
and computational cost of the PD.
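To illustrate the shape of such a pixel-level discriminator, the sketch below stacks three convolutions on a shallow feature map and outputs one domain logit per pixel, so a GAN loss can be computed for every input-image pixel; the channel widths and activation choices are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PixelLevelDiscriminator(nn.Module):
    """Three-convolution pixel-level domain discriminator (illustrative)."""

    def __init__(self, in_channels=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),   # one domain logit per pixel
        )

    def forward(self, shallow_feat):               # (B, C, H, W) shallow features
        return self.net(shallow_feat)              # (B, 1, H, W) pixel-wise logits

# A per-pixel binary cross-entropy against a source/target domain label map
# would then play the role of the GAN loss computed for each input-image pixel.
```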
We collect 120,000 target domain images during the
training of PDA-GAN. Experimental results show that
PDA-GAN achieves state-of-the-art (SOTA) performance
according to composition-relevant metrics. It outperforms
CGL-GAN on the CGL-Dataset and achieves relative improvements on the background complexity, occlusion subject degree, and occlusion product degree metrics of 6.21%, 17.5%, and 14.5%, respectively, leading to significantly im-
proved visual quality of synthesized graphic layouts in
many cases. In summary, this paper comprises the follow-
ing contributions:
• We design a GAN with a novel pixel-level discrimina-
tor working on shallow level features to bridge the do-
main gap that exists between training images in CGL-
Dataset and clean product images.
• Both quantitative and qualitative evaluations demon-
strate that PDA-GAN can achieve SOTA performance
and is able to generate high-quality image-aware
graphic layouts for advertising posters.
|
Zhang_Dense_Distinct_Query_for_End-to-End_Object_Detection_CVPR_2023 | Abstract
One-to-one label assignment in object detection has suc-
cessfully obviated the need for non-maximum suppression
(NMS) as post-processing, making the pipeline end-to-end. However, it triggers a new dilemma, as the widely
used sparse queries cannot guarantee a high recall, while
dense queries inevitably bring more similar queries and en-
counter optimization difficulties. As both sparse and dense
queries are problematic, then what are the expected queries
in end-to-end object detection? This paper shows that the
solution should be Dense Distinct Queries (DDQ). Con-
cretely, we first lay dense queries like traditional detectors
and then select distinct ones for one-to-one assignments.
DDQ blends the advantages of traditional and recent end-
to-end detectors and significantly improves the performance
of various detectors including FCN, R-CNN, and DETRs.
Most impressively, DDQ-DETR achieves 52.1 AP on MS-
COCO dataset within 12 epochs using a ResNet-50 back-
bone, outperforming all existing detectors in the same set-
ting. DDQ also shares the benefit of end-to-end detec-
tors in crowded scenes and achieves 93.8 AP on Crowd-
Human. We hope DDQ can inspire researchers to con-
sider the complementarity between traditional methods and
end-to-end detectors. The source code can be found at
https://github.com/jshilong/DDQ .
* Equal contribution. | 1. Introduction
Object detection is one of the most fundamental tasks in
computer vision, which aims at answering what objects are
in an image and where they are. To achieve the objective,
the detector is expected to detect all objects and mark each
object with only one bounding box.
Due to the complex spatial distribution and the vast
shape variance of objects, detecting all objects is quite chal-
lenging. To solve the problem, traditional detectors [17,21,
27] first lay predefined dense grid queries¹ to achieve a high
recall. Convolutions with shared weights are then applied to
quickly process dense queries in a sliding-window manner.
At last, one ground truth bounding box is assigned to multi-
ple similar candidate queries for optimization. However,
the one-to-many assignment results in redundant predic-
tions and thus requires extra duplicate-removal operations
(e.g., non-maximum suppression) during inference, which
causes misaligned inference with training and hinders the
pipeline from being end-to-end (as shown in Fig. 1.(a)).
This paradigm is broken by DETR [2], which assigns
only one positive query to each ground truth bounding
box (one-to-one assignment) to achieve end-to-end. This
scheme requires heavy computation to refine queries and
adopts self-attention to model interactions between queries
to facilitate the optimization of one-to-one assignment, which unfortunately limits the number of queries. For example, DETR only initializes hundreds of learnable object queries. Therefore, compared to the densely distributed queries in conventional detectors, the sparse queries fall short in recall rate, as shown in Fig. 1.(b).
¹ Anchors [17, 21] or anchor points [27] in conventional detectors play the same role as sparse object queries in [2]. Hence, we collectively refer to densely distributed anchor boxes and anchor points as dense queries.
Some recent works have also tried to integrate dense
queries into one-to-one assignment [24, 28, 32]. How-
ever, dense queries in end-to-end detectors face unique
challenges. For example, our analysis shows that this
paradigm would inevitably introduce many similar queries
(potentially representing the same instance) and that it
suffers from difficult and inefficient optimization, as similar queries are assigned opposite labels under one-to-one assignment (Fig. 1.(c)).
Now that both sparse queries (low recall) and dense
queries (optimization difficulty) under one-to-one assign-
ment are sub-optimal, what are the expected queries in end-
to-end object detection?
In this study, we demonstrate that the solution should be
dense distinct queries (DDQ) , meaning that the queries for
object detection should be both densely distributed to de-
tect all objects and also distinct from each other to facilitate
the optimization of one-to-one label assignment. Guided
by such a principle, we consistently improve the perfor-
mance of various detector architectures, including FCN,
R-CNN, and DETRs. For one-stage detectors composed
of fully convolutional networks (FCN), we first propose a
pyramid shuffle operation to replace heavy self-attentions to
model the interaction between dense queries. Then, a dis-
tinct queries selection scheme ensures that the one-to-one
assignment is only imposed on the selected distinct queries,
preventing contradictory labels from being assigned to simi-
lar queries. Such an end-to-end one-stage detector is named
DDQ FCN and achieves state-of-the-art performance in
one-stage detectors. DDQ also naturally extends to DETR
and R-CNN structures [7, 19, 26, 38] by first laying dense
queries as in [38] and then selecting distinct queries for
later refining stages, which are respectively dubbed DDQ
R-CNN and DDQ DETR.
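The "select distinct ones" step can be pictured as class-agnostic duplicate filtering over the dense query proposals before one-to-one assignment; the sketch below is only an illustrative approximation using torchvision's NMS, not the released DDQ code, and the IoU threshold and top-k cap are assumptions.

```python
import torch
from torchvision.ops import nms

def select_distinct_queries(boxes, scores, iou_thresh=0.7, topk=300):
    """Keep dense queries for recall, but forward only mutually distinct ones
    (low pairwise IoU) to the one-to-one assignment stage.

    boxes  : (N, 4) boxes predicted by the dense queries, (x1, y1, x2, y2)
    scores : (N,)   class-agnostic confidence of each query
    """
    keep = nms(boxes, scores, iou_thresh)   # indices of mutually distinct queries
    keep = keep[:topk]                      # cap the number of survivors
    return boxes[keep], scores[keep], keep
```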
We have conducted experiments on two datasets—MS-
COCO [18] and CrowdHuman [23]. DDQ FCN and DDQ
R-CNN obtain 41.5/44.6 AP, respectively, on the MS-
COCO detection dataset [18] with the standard 1x sched-
ule [17, 21]. Compared to recent DETRs, DDQ DETR
achieves 52.1 AP in just 12 epochs with the DETR-style
augmentation [2]. The strong performance demonstrates
that DDQ overcomes the optimization difficulty in end-to-
end detectors and converges as fast as traditional detectors
with higher performance.
Object detection in crowded scenes such as CrowdHu-
man [23] is another arena to testify to the effectiveness
of DDQ. It is extremely cumbersome to tune the post-processing NMS in traditional detectors, as a low IoU
threshold leads to missing overlapping objects, while a high
threshold brings too many false positives. Recent end-to-
end structures also struggle to distinguish between dupli-
cated predictions and overlapping objects due to their diffi-
cult optimization. In this study, DDQ FCN/R-CNN/DETR
achieve 92.7/93.5/93.8 AP and 98.2/98.6/98.7 recall on
CrowdHuman [23], surpassing both traditional and end-to-
end detectors by a large margin.
|
Zala_Hierarchical_Video-Moment_Retrieval_and_Step-Captioning_CVPR_2023 | Abstract
There is growing interest in searching for information
from large video corpora. Prior works have studied rele-
vant tasks, such as text-based video retrieval, moment re-
trieval, video summarization, and video captioning in iso-
lation, without an end-to-end setup that can jointly search
from video corpora and generate summaries. Such an end-
to-end setup would allow for many interesting applications,
e.g., a text-based search that finds a relevant video from
a video corpus, extracts the most relevant moment from
that video, and segments the moment into important steps
with captions. To address this, we present the HIREST
(HIerarchical REtrieval and STep-captioning) dataset and
propose a new benchmark that covers hierarchical infor-
mation retrieval and visual/textual stepwise summarization
from an instructional video corpus. HIREST consists of
3.4K text-video pairs from an instructional video dataset,
where 1.1K videos have annotations of moment spans rel-
evant to text query and breakdown of each moment into
key instruction steps with caption and timestamps (totaling
8.6K step captions). Our hierarchical benchmark consists
of video retrieval, moment retrieval, and two novel moment
segmentation and step captioning tasks. In moment segmen-
tation, models break down a video moment into instruction
steps and identify start-end boundaries. In step caption-
ing, models generate a textual summary for each step. We
also present task-specific and end-to-end joint baseline models as starting points for our new benchmark. While the baseline
models show some promising results, there still exists large
room for future improvement by the community.1
| 1. Introduction
Encouraged by the easy access to smartphones, record-
ing software, and video hosting platforms, people are in-
creasingly accumulating videos of all kinds.
*equal contribution
1 code and data: https://github.com/j-min/HiREST
To fuel the subsequent growing interest in using machine learning sys-
tems to extract and summarize important information from
these large video corpora based on text queries, progress has
been made in video retrieval [2, 17, 18, 41, 42], moment re-
trieval [10, 16, 17], video summarization [9, 24, 33, 34], and
video captioning [13, 20, 41, 42]. Previous works have gen-
erally focused on solving these tasks independently; how-
ever, all these tasks share the common goal of retrieving in-
formation from a video corpus, at different levels of scales
and via different modalities. Hence, in this work, we intro-
duce a new hierarchical benchmark that combines all four
tasks to enable novel and useful real-world applications.
For example, a text-based search service that finds a rele-
vant video from a large video corpus, extracts the most rel-
evant moment from that video, segments the moment into
important steps, and captions them for easy indexing and
retrieval. To support this, we introduce H IREST, a hierar-
chical instructional video dataset for a holistic benchmark
of information retrieval from a video corpus (see Sec. 3).
HIREST consists of four annotations: 1) 3.4K pairs of text queries about open-domain instructions (e.g., ‘how to make glow in the dark slime’) and videos, 2) relevant moment timestamps inside the 1.1K videos, where only a part of the video (<75%) is relevant to the text query, 3) a breakdown of each moment into several instructional steps with timestamps (7.6 steps per video, 8.6K steps in total), and 4) a manually curated English caption for each step (e.g., ‘pour shampoo in container’). We collect fine-grained step-wise annota-
tions of HIREST in a two-step annotation process with on-
line crowdworkers on instructional text-video pairs from the
HowTo100M [23] dataset (see Sec. 3.1). The instructional
videos often come with clear step-by-step instructions, al-
lowing fine-grained segmentation of the videos into short
steps. While there are existing video datasets with step an-
notations, they are based on a small number of predefined
task names [36, 46] (thus step captions are not diverse), or
are limited to a single topic (e.g., cooking [45]). HIREST
covers various domains and provides diverse step captions
with timestamps written by human annotators (see Table 1),
presenting new challenging and realistic benchmarks for hierarchical video information retrieval.
Figure 1. Overview of the four hierarchical tasks of our HIREST dataset (Sec. 3). 1) Video retrieval: find a video that is most relevant to a given text query. 2) Moment retrieval: choose the relevant span of the video, by trimming the parts irrelevant to the text query. 3) Moment segmentation: break down the span into several steps and identify the start-end boundaries of each step. 4) Step captioning: generate step-by-step textual summaries of the moment.
Using the HIREST dataset, we benchmark four tasks:
1) video retrieval, 2) moment retrieval, 3) moment segmen-
tation, and 4) step captioning (see Fig. 1 and Sec. 3.3). In
the video retrieval task, models have to identify a video that
is most relevant to a given text query. In the moment re-
trieval task, models have to select the relevant span of the
video, by trimming the parts irrelevant to the text query
(blue boundary in Fig. 1). In the moment segmentation task,
models have to break down the relevant portion into sev-
eral instructional steps and identify the start-end boundaries
of each step (green boundaries in Fig. 1). Finally, in the
step captioning task, models have to generate step captions
(e.g.‘spray the warm water on carpet’ ) of the instructional
steps. To provide good starting points to the community
for our new task hierarchy, we show the performance of re-
cent baseline models on HIREST. For baselines, we use strong models including CLIP [27], EVA-CLIP [8], Frozen-
in-Time [2], BMT [13], and SwinBERT [20]. On all four
tasks, we find that finetuning models on HIREST improves performance; however, there still exists large room to improve performance.
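As an example of how the first task (video retrieval) is typically scored by such baselines, the sketch below ranks candidate videos by query-frame cosine similarity, assuming frame and query embeddings have already been extracted with a CLIP-style encoder; the mean pooling over frames is an assumption, not necessarily the benchmark's exact protocol.

```python
import numpy as np

def rank_videos(query_emb, video_frame_embs):
    """Rank candidate videos for a text query by mean frame-query similarity.

    query_emb        : (D,) embedding of the text query
    video_frame_embs : list of (T_i, D) arrays, one per candidate video
    """
    q = query_emb / np.linalg.norm(query_emb)
    scores = []
    for frames in video_frame_embs:
        f = frames / np.linalg.norm(frames, axis=1, keepdims=True)
        scores.append(float((f @ q).mean()))   # average frame-level similarity
    return np.argsort(scores)[::-1]            # indices of best-matching videos
```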
We summarize our contributions in this paper: 1) We
present the HIREST dataset and propose a new benchmark that
covers hierarchy in information retrieval and visual/textual
summarization from an instructional video corpus. 2) Un-
like existing video datasets with step captions based on
predefined task names or limited to a single topic, our
HIREST provides diverse, high-quality step captions with
timestamps written by human annotators. 3) We provide
a joint baseline model that can perform moment retrieval,
moment segmentation, and step captioning with a single ar-
chitecture. 4) We provide comprehensive dataset analyses and show experiments with baseline models for each task, where there still exists large room to improve model performance.
We hope that HIREST can foster future work on end-to-end
systems for holistic information retrieval and summariza-
tion on large video corpus. In addition, our manually an-
notated step captions can also be a good source for training
and testing the step-by-step reasoning of large multimodal
language models [40, 44].
|
Xu_Meta_Compositional_Referring_Expression_Segmentation_CVPR_2023 | Abstract
Referring expression segmentation aims to segment an
object described by a language expression from an image.
Despite the recent progress on this task, existing models
tackling this task may not be able to fully capture semantics
and visual representations of individual concepts, which
limits their generalization capability, especially when han-
dling novel compositions of learned concepts . In this work,
through the lens of meta learning, we propose a Meta Com-
positional Referring Expression Segmentation ( MCRES )
framework to enhance model compositional generalization
performance. Specifically, to handle various levels of novel
compositions, our framework first uses training data to con-
struct a virtual training set and multiple virtual testing sets,
where data samples in each virtual testing set contain a
level of novel compositions w.r.t. the virtual training set.
Then, following a novel meta optimization scheme to opti-
mize the model to obtain good testing performance on the
virtual testing sets after training on the virtual training set,
our framework can effectively drive the model to better cap-
ture semantics and visual representations of individual con-
cepts, and thus obtain robust generalization performance
even when handling novel compositions. Extensive experi-
ments on three benchmark datasets demonstrate the effec-
tiveness of our framework.
| 1. Introduction
Referring expression segmentation (RES) [12, 38, 40]
aims to segment a visual entity in an image given a lin-
guistic expression. This task has been receiving increas-
ing attention in recent years [5, 18, 34, 37], as it can play
an important role in various applications, such as language-
based human-robot interaction and interactive image editing.
*Corresponding Author
Figure 1. Illustration of novel compositions and various levels of compositions. (a) An example of the testing sample containing the novel composition of “dark coffee” in the RefCOCO dataset [41]. Such a novel composition itself does not exist in the training data, but its individual components (i.e., “dark” and “coffee”) exist in the training data. (b) An example of various levels of compositions in an expression. We perform constituency parsing of the expression using AllenNLP [9]. Based on the obtained parsing tree, we can then obtain various levels of compositions (e.g., word-word level, word-phrase level) in this expression.
However, despite the recent progress on tackling this
task [5, 34, 42], existing methods may struggle with han-
dling the testing samples, of which the expressions contain
novel compositions of learned concepts. Here a novel com-
position means that the composition itself does not exist in
the training data, but its individual components (e.g., words,
phrases) exist in the training data, as shown in Fig. 1a.
We observe that testing samples containing such novel
compositions of learned concepts widely exist in RES
datasets [26, 28, 41]. However, existing RES models may
not be able to well handle novel compositions during test-
ing. Here we test the generalization capability of multi-
ple state-of-the-art models [25, 37, 42] in terms of handling
novel compositions, as shown in Table 2. Specifically, we
first split each testing set of the RefCOCO dataset. In each
testing set, one split subset includes the data samples, in
which all the contained compositions are seen in the Ref-
COCO training set, while another subset includes the data
samples containing novel compositions, of which the indi-
vidual components (e.g., words, phrases) exist in the Re-
fCOCO training set but the composition itself is unseen
in the training set, i.e., containing novel compositions of
learned concepts. Then we evaluate the models [25, 37, 42]
on the two subsets in each testing set, and find that for
each model, its testing performance on the subset contain-
ing novel compositions drops obviously compared to the
performance on the other subset. For these models, the per-
formance gap between the two subsets can reach 14%−17%
measured by the metric of overall IoU. Such a clear perfor-
mance gap indicates that existing models struggle with gen-
eralizing to novel compositions of learned concepts. This
might due to that the model does not effectively capture the
semantics and visual representations of individual concepts
(e.g., “dark”, “coffee” in Fig. 1a) during training. Then
the trained model may fail to recognize a novel composi-
tion (e.g., “dark coffee”) at testing time, which though is
composed of learned concepts.
Thus to handle this issue, we aim to train the model to
effectively capture the semantics and visual representations
of individual concepts during training. Despite the concep-
tual simplicity, how to guide the model’s learning behavior
towards this goal is a challenging problem. Here from the
perspective of meta learning, we propose a Meta Composi-
tional Referring Expression Segmentation (MCRES) frame-
work, to effectively handle such a challenging problem by
only changing the model training scheme.
Meta learning proposes to perform virtual testing during
model training for better performance [7, 29]. Inspired by
this, to improve the generalization capability of RES mod-
els, our MCRES framework incorporates a meta optimiza-
tion scheme that consists of three steps: virtual training, virtual testing, and meta update. Specifically, we first split
the training set to construct a virtual training set for virtual
training, and a virtual testing set for virtual testing. The
data samples in the virtual testing set contain novel com-
positions w.r.t. the virtual training set. For example, if the
expressions of data samples in the virtual training set con-
tain both words “dark” and “coffee” but do not contain their
composition (i.e., “dark coffee”), the virtual testing set can
include this novel composition correspondingly.
Based on the constructed virtual training set and virtual
testing set, we first train the model using the virtual train-
ing set, and then evaluate the trained model on the virtual
testing set. During virtual training, the model may learn
the compositions of individual concepts as a whole with-
out truly understanding the semantics and visual representations of individual concepts, though this can still improve
model training performance. For example, if there are many
training samples containing the composition of “yellow ba-
nana” in the virtual training set, the model can superficially
correlate “banana” with “yellow” and learn this composi-
tion as a whole, since using such spurious correlations can
facilitate the model learning [1, 10, 39]. However, learning
the compositions as a whole over the virtual training set may
not improve model performance much on the virtual testing
set in virtual testing, since the virtual testing set contains
novel compositions w.r.t. the virtual training set. Thus to
achieve good testing performance on such a virtual testing
set, the model needs to effectively capture semantics and
visual representations of individual concepts during virtual
training. In this way, the model testing performance on the
virtual testing set serves as a generalization feedback to the
model virtual training process.
Thus after the virtual training and virtual testing, we
can further update the model to obtain better testing per-
formance on the virtual testing set (i.e., meta update), so
as to drive the model training on the virtual training set to-
wards the direction of learning to capture semantics and vi-
sual representations of individual concepts, i.e., learning to
learn. In this manner, our framework is able to optimize the
model for robust generalization performance, even tackling
the challenging testing samples with novel compositions.
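The three-step scheme can be sketched as a first-order meta-learning loop, shown below under stated assumptions: `loss_fn` is any standard RES training loss, the inner learning rate and `lambda_meta` are hypothetical hyperparameters, and the authors' exact optimization may differ.

```python
import copy
import torch

def meta_optimization_step(model, loss_fn, virtual_train_batch,
                           virtual_test_batches, optimizer,
                           inner_lr=1e-4, lambda_meta=1.0):
    """One step of virtual training, virtual testing, and meta update
    (first-order approximation)."""
    # 1) Virtual training: a single inner gradient step on a temporary copy.
    fast = copy.deepcopy(model)
    inner_loss = loss_fn(fast, virtual_train_batch)
    inner_grads = torch.autograd.grad(inner_loss, list(fast.parameters()))
    with torch.no_grad():
        for p, g in zip(fast.parameters(), inner_grads):
            p -= inner_lr * g

    # 2) Virtual testing: evaluate the adapted model on the virtual testing
    #    sets, one per level of novel composition.
    meta_loss = sum(loss_fn(fast, b) for b in virtual_test_batches)
    meta_grads = torch.autograd.grad(meta_loss, list(fast.parameters()))

    # 3) Meta update: combine the virtual-training gradient and the
    #    (first-order) generalization feedback, then update the real model.
    optimizer.zero_grad()
    base_loss = loss_fn(model, virtual_train_batch)
    base_loss.backward()
    with torch.no_grad():
        for p, g in zip(model.parameters(), meta_grads):
            p.grad = (p.grad if p.grad is not None
                      else torch.zeros_like(p)) + lambda_meta * g
    optimizer.step()
    return base_loss.item(), meta_loss.item()
```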
Moreover, given that expressions can often be hierarchi-
cally decomposed, there can exist various levels of novel
compositions. Specifically, to identify meaningful compo-
sitions in an expression, we can parse an expression into a
tree structure based on the constituency parsing tool [9] as
shown in Fig.1b. In such a parsing tree, under the same par-
ent node, each pair of child nodes (e.g., “white” and “bird”)
are closely semantically related, and thus can form a mean-
ingful composition. Since each child node can be a word or
a phrase as in Fig. 1b, there can naturally exist the following
three levels of novel compositions: word-word level (e.g.,
“white” and “bird”), word-phrase level (e.g., “standing” and
“in water”) and phrase-phrase level (e.g., “white bird” and
“standing in water”), which correspond to different levels of
comprehension complexity. To better handle such a range
of novel compositions, we construct multiple virtual test-
ing sets in our framework, where each virtual testing set is
constructed to handle one level of novel compositions.
Our framework only changes the model training scheme
without the need to change the model structure. Thus our
framework is general, and can be conveniently applied on
various RES models. We test our framework on multiple
models, and obtain consistent performance improvement.
The contributions of our work are threefold: 1) We
propose a novel framework (MCRES) to effectively im-
prove generalization performance of RES models, espe-
cially when handling novel compositions of learned con-
19479
cepts. 2) Via constructing a virtual training set and mul-
tiple virtual testing sets w.r.t. various levels of novel com-
positions, our framework can train the model to well han-
dle various levels of novel compositions. 3) When applied
on various models on three RES benchmarks [28, 41], our
framework achieves consistent performance improvement.
|
Zhang_Frame-Event_Alignment_and_Fusion_Network_for_High_Frame_Rate_Tracking_CVPR_2023 | Abstract
Most existing RGB-based trackers target low frame rate
benchmarks of around 30 frames per second. This setting
restricts the tracker’s functionality in the real world, espe-
cially for fast motion. Event-based cameras as bioinspired
sensors provide considerable potential for high frame rate
tracking due to their high temporal resolution. However,
event-based cameras cannot offer fine-grained texture in-
formation like conventional cameras. This unique comple-
mentarity motivates us to combine conventional frames and
events for high frame rate object tracking under various
challenging conditions. In this paper, we propose an end-to-
end network consisting of multi-modality alignment and fu-
sion modules to effectively combine meaningful information
from both modalities at different measurement rates. The
alignment module is responsible for cross-style and cross-
frame-rate alignment between frame and event modalities
under the guidance of the moving cues furnished by events.
While the fusion module is accountable for emphasizing
valuable features and suppressing noise information by the
mutual complement between the two modalities. Exten-
sive experiments show that the proposed approach outper-
forms state-of-the-art trackers by a significant margin in
high frame rate tracking. With the FE240hz dataset, our
approach achieves high frame rate tracking up to 240Hz.
| 1. Introduction
Visual object tracking is a fundamental task in computer
vision, and deep learning-based methods [7, 9,10,15,35,56]
have dominated this field. Limited by the conventional sen-
sor, most existing approaches are designed and evaluated on
benchmarks [13, 24,38,53] with a low frame rate of approx-
imately 30 frames per second (FPS). However, the value
of a higher frame rate tracking in the real world has been
proved [16, 21–23]. For example, the shuttlecock can reach
speeds of up to 493 km/h, and analyzing its position is es-
sential for athletes to learn how to improve their skills [46].
Utilizing professional high-speed cameras is one strategy for high frame rate tracking, but these cameras are inaccessible to casual users.
⋆Xin Yang ([email protected]) is the corresponding author.
Figure 1. A comparison of our AFNet with SOTA trackers (ToMP-MF, DeT, HMFT, FENet, AFNet, and GT). All competing trackers locate the target at time t + ∆t with conventional frames at time t and aggregated events at time t + ∆t as inputs. Our method achieves high frame rate tracking up to 240Hz on the FE240hz dataset. The two examples also show the complementary benefits of both modalities. (a) The event modality does not suffer from HDR, but the frame does; (b) The frame modality provides rich texture information, while the events are sparse.
Consumer devices with cameras,
such as smartphones, have made attempts to integrate sen-
sors with similar functionalities into their systems. How-
ever, these sensors still suffer from large memory require-
ments and high power consumption [49].
As bio-inspired sensors, event-based cameras measure
light intensity changes and output asynchronous events to
represent visual information. Compared with conventional
frame-based sensors, event-based cameras offer a high mea-
surement rate (up to 1MHz), high dynamic range (140 dB
vs. 60 dB), low power consumption, and high pixel band-
width (on the order of kHz) [14]. These unique proper-
ties offer great potential for higher frame rate tracking in
challenging conditions. Nevertheless, event-based cameras
cannot measure fine-grained texture information like con-
ventional cameras, thus inhibiting tracking performance.
Therefore, in this paper, we exploit to integrate the valuable
information from event-based modality with that of frame-
based modality for high frame rate single object tracking
under various challenging conditions.
To attain our objective, two challenges need to be addressed: (i) The measurement rate of event-based cameras
is much higher than that of conventional cameras. Hence
for high frame rate tracking, low-frequency frames must be
aligned with high-frequency events to disambiguate target
locations. Although recent works [34, 45,48,50] have pro-
posed various alignment strategies across multiple frames
for video-related tasks, they are specifically designed for
conventional frames of the same modality at different mo-
ments. Thus, applying these approaches directly to our
cross-modality alignment does not offer an effective solu-
tion. (ii) Effectively fusing complementary information be-
tween modalities and preventing interference from noise is
another challenge. Recently, Zhang et al. [61] proposed
a cross-domain attention scheme to fuse visual cues from
frame and event modalities for improving the single object
tracking performance under different degraded conditions.
However, the tracking frequency is bounded by the conven-
tional frame rate since they ignore the rich temporal infor-
mation recorded in the event modality.
To tackle the above challenges, we propose a novel end-
to-end framework to effectively combine complementary
information from two modalities at different measurement
rates for high frame rate tracking, dubbed AFNet, which
consists of two key components for alignment and fusion,
respectively. Specifically, (i) we first propose an event-
guided cross-modality alignment (ECA) module to simulta-
neously accomplish cross-style alignment and cross-frame-
rate alignment. Cross-style alignment is enforced by match-
ing feature statistics between conventional frame modality
and events augmented by a well-designed attention scheme;
Cross-frame-rate alignment is based on deformable convo-
lution [8] to facilitate alignment without explicit motion es-
timation or image warping operation by implicitly focusing
on motion cues. (ii) A cross-correlation fusion (CF) mod-
ule is further presented to combine complementary infor-
mation by learning a dynamic filter from one modality that
contributes to the feature expression of another modality,
thereby emphasizing valuable information and suppress-
ing interference. Extensive experiments on different event-
based tracking datasets validate the effectiveness of the pro-
posed approach (see Figure 1 as an example).
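As a rough illustration of the cross-correlation fusion idea (one modality predicting a dynamic filter that modulates the other), consider the sketch below; the global-pooled context, per-channel depthwise kernels, kernel size, and residual connection are all assumptions rather than the paper's exact design. Applying such a module in both directions (frames filtering events and events filtering frames) would realize the mutual complement described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicFilterFusion(nn.Module):
    """Filter one modality's features with kernels predicted from the other."""

    def __init__(self, channels=256, k=3):
        super().__init__()
        self.k = k
        # Predict one depthwise k x k kernel per channel from modality A.
        self.kernel_head = nn.Linear(channels, channels * k * k)

    def forward(self, feat_a, feat_b):
        b, c, h, w = feat_b.shape
        ctx = feat_a.mean(dim=(2, 3))                        # (B, C) global context
        kernels = self.kernel_head(ctx).view(b * c, 1, self.k, self.k)
        # Depthwise convolution of modality B with the dynamic kernels,
        # emphasizing features that modality A deems informative.
        out = F.conv2d(feat_b.reshape(1, b * c, h, w), kernels,
                       padding=self.k // 2, groups=b * c)
        return out.view(b, c, h, w) + feat_b                 # residual fusion
```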
In summary, we make the following contributions:
•Our AFNet is, to our knowledge, the first to combine
the rich textural clues of frames with the high temporal reso-
lution offered by events for high frame rate object tracking.
•We design a novel event-guided alignment framework
that performs cross-modality and cross-frame-rate align-
ment simultaneously, as well as a cross-correlation fusion
architecture that complements the two modalities.
•Through extensive experiments, we show that the pro-
posed approach outperforms state-of-the-art trackers in var-
ious challenging conditions. |
Zhang_Efficient_Map_Sparsification_Based_on_2D_and_3D_Discretized_Grids_CVPR_2023 | Abstract
Localization in a pre-built map is a basic technique for
robot autonomous navigation. Existing mapping and local-
ization methods commonly work well in small-scale envi-
ronments. As a map grows larger, however, more mem-
ory is required and localization becomes inefficient. To
solve these problems, map sparsification becomes a prac-
tical necessity to acquire a subset of the original map for
localization. Previous map sparsification methods add a
quadratic term in mixed-integer programming to enforce a
uniform distribution of selected landmarks, which requires
high memory capacity and heavy computation. In this pa-
per, we formulate map sparsification in an efficient linear
form and select uniformly distributed landmarks based on
2D discretized grids. Furthermore, to reduce the influence
of different spatial distributions between the mapping and
query sequences, which is not considered in previous meth-
ods, we also introduce a space constraint term based on 3D
discretized grids. The exhaustive experiments in different
datasets demonstrate the superiority of the proposed meth-
ods in both efficiency and localization performance. The
relevant codes will be released at https://github.
com/fishmarch/SLAM_Map_Compression .
| 1. Introduction
To realize autonomous navigation for robots, localiza-
tion in a pre-built map is a basic technique. Lots of algo-
rithms have been proposed for mapping and localization us-
ing different sensors, including camera [20] and lidar [27].
These algorithms commonly work well in some small-scale
environments now. When applied in large-scale environ-
ments or in long-term, however, new challenges appear and
need to be settled for practical applications. Some of these
problems are time and memory consuming.
When using cameras, visual simultaneous localization
and mapping (SLAM) is a commonly used method to build
maps. In visual SLAM, redundant features are extracted
from images and then constructed as landmarks in a map, such that camera poses can be tracked robustly and accurately.
Figure 1. Localization results in a compact map. The original map is constructed from sequence 00 of the KITTI dataset [14]. The original map consists of 141K landmarks, indicated by gray points, while the compact map consists of only 5K landmarks, indicated by blue points. 96.15% of the query images are localized successfully in the compact map, indicated by green poses.
These redundant landmarks promise good localiza-
tion results in small-scale environments. As more and more
images are received when working in large-scale environ-
ments or in long-term, however, memory consumption grows without bound. Localizing in such large maps will
also be more time-consuming. These problems are espe-
cially severe for some low-cost robots.
Actually, not all landmarks are necessary for robots to
localize in pre-built maps. In theory, even only 4 matched
landmarks can determine a camera pose using EPnP [16]
(more landmarks are commonly used for robust and ac-
curate estimation). This reveals that maps can be com-
pressed and still retain comparable performance for local-
ization. Map compression can be classified into two types:
descriptor compression [5, 18, 22] and landmark sparsifica-
tion [17, 23]. The research of this paper falls in the latter
one, which is to find a subset of an original map while main-
taining comparable localization performance. A subset map
is called a compact map in this paper. For example, as indi-
cated in Fig. 1, only 3.91% of the original landmarks are
selected as the compact map, in which more than 96% of
the query images can still be localized successfully.
To select an optimal subset for localization, map sparsi-
fication is related to a K-cover problem [17], which means
the number of landmarks in a compact map is minimized
while keeping the number of associated landmarks in each
image larger than a threshold (i.e., K). To solve the K-
cover problem, it can be formulated as mixed-integer lin-
ear programming, through which an optimal subset is ob-
tained. The original formulation only considers the number
of landmarks for localization, while their distribution also
affects localization performance. Therefore, some works
design and add quadratic terms, formulating mixed-integer
quadratic programming to enforce a more uniform distri-
bution of selected landmarks [11, 23]. However, these
quadratic terms slow down the optimization speed heav-
ily. The required high memory capacity and heavy com-
putation become severe limitations of these map sparsifica-
tion methods. For example, in our experiments, the mixed-
integer quadratic programming methods cannot be used for
the maps containing more than 55K landmarks, because all
computer memory has been consumed.
To select uniformly distributed landmarks and at the
same time maintain the computation efficiency, we keep
the map sparsification formulation in a linear form in this paper. We first discretize images into fixed-size 2D grids.
Then for all observed landmarks, we can find which cells
they fall in. Therefore, more occupied cells reflect a more
uniform distribution of landmarks. This can be formulated
in a linear form easily. In this way, uniformly distributed
landmarks are selected efficiently, and thus the localization
performance in compact maps will be better.
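One plausible way to write this down, shown only as a hedged sketch and not necessarily the paper's exact objective, is a mixed-integer linear program: binary variables select landmarks, each image must keep at least K of its observed landmarks, and a reward for occupied 2D grid cells encourages a uniform distribution. The `pulp` modeling library and the values of `K` and `lam` are illustrative choices.

```python
import pulp

def sparsify_map(obs, cell_of_obs, num_landmarks, K=30, lam=0.5):
    """Select a compact landmark subset with a linear program (illustrative).

    obs         : dict image_id -> list of observed landmark ids
    cell_of_obs : dict (image_id, landmark_id) -> 2D grid cell index
    """
    prob = pulp.LpProblem("map_sparsification", pulp.LpMinimize)
    x = {j: pulp.LpVariable(f"x_{j}", cat="Binary") for j in range(num_landmarks)}

    # y[(i, c)] = 1 if cell c of image i contains at least one kept landmark.
    cells = {}
    for i, lms in obs.items():
        for j in lms:
            cells.setdefault((i, cell_of_obs[(i, j)]), []).append(j)
    y = {(i, c): pulp.LpVariable(f"y_{i}_{c}", cat="Binary") for (i, c) in cells}

    # Objective: few landmarks, but many occupied cells (uniform distribution).
    prob += pulp.lpSum(x.values()) - lam * pulp.lpSum(y.values())

    for i, lms in obs.items():                       # K-cover constraint per image
        prob += pulp.lpSum(x[j] for j in lms) >= K
    for (i, c), lms in cells.items():                # a cell counts as occupied
        prob += y[(i, c)] <= pulp.lpSum(x[j] for j in lms)   # only if one is kept

    prob.solve()
    return [j for j in x if x[j].value() and x[j].value() > 0.5]
```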
Another severe limitation is that all past works assume
the spatial distribution of the query sequence is close to that
of the mapping sequence. Then landmarks are selected only
based on their association with the images of the mapping
sequence. However, this assumption cannot be guaranteed
in real robotic applications. The perspective difference be-
tween query and mapping sequences may cause the local-
ization in compact maps to fail unexpectedly.
To ensure more query images from whole 3D space can
be localized successfully in compact maps, we propose to
select landmarks based on not only their association with
mapping images but also their visibility in 3D space. The
visibility region of a landmark is defined based on its view-
ing angle and distance. The 3D space is also discretized
into a 3D grid. For each 3D cell, all visible land-
marks are collected, and a constraint on the number of visi-
ble landmarks is added into the map sparsification formulation. In this way, landmarks are selected to maintain localization
performance for query images from the whole space.
In summary, the contributions of this paper are as fol-
lows: 1) We propose an efficient map sparsification method
formulating uniform landmark distribution in a linear form
to keep the computation efficiency. 2) We propose to per-
form map sparsification involving the visibility of land-
marks to achieve better localization results for the query
images from the whole space. 3) We conduct exhaustive ex-
periments in different datasets and compare with other state-
of-the-art methods, showing the effectiveness and superior-
ity of our methods for different kinds of query sequences.
|
Zhang_Skinned_Motion_Retargeting_With_Residual_Perception_of_Motion_Semantics__CVPR_2023 | Abstract
A good motion retargeting cannot be reached without
reasonable consideration of source-target differences on
both the skeleton and shape geometry levels. In this work,
we propose a novel Residual RET argeting network (R2ET)
structure, which relies on two neural modification modules,
to adjust the source motions to fit the target skeletons and
shapes progressively. In particular, a skeleton-aware mod-
ule is introduced to preserve the source motion semantics. A
shape-aware module is designed to perceive the geometries
of target characters to reduce interpenetration and contact-
missing. Driven by our explored distance-based losses that
explicitly model the motion semantics and geometry, these
two modules can learn residual motion modifications on
the source motion to generate plausible retargeted motion
in a single inference without post-processing. To balance
these two modifications, we further present a balancing
gate to conduct linear interpolation between them. Ex-
tensive experiments on the public dataset Mixamo demon-
strate that our R2ET achieves the state-of-the-art perfor-
mance, and provides a good balance between the preserva-
tion of motion semantics and the attenuation of in-
terpenetration and contact-missing. Code is available at
https://github.com/Kebii/R2ET .
| 1. Introduction
As a process of mapping the motion of a source charac-
ter to a target character without losing plausibility, motion
retargeting is a long-standing problem in the community of
computer vision and computer graphics. It has a wide spec-
trum of applications in game and animation industry, and
is a cornerstone of the digital avatar and metaverse tech-
nologies [25]. In recent years, learning-based retargeting
methods started sparkling in the community. Among them,
the neural motion retargeting [1, 16, 24, 25], which has ad-
vantages in intelligent perception and stable inference, becomes a new research trend.
*Most of this work was done during Jiaxu’s internship at Tencent.
†Corresponding author: [email protected]
Figure 1. Our R2ET fully considers the source-target differences on both the skeleton and shape geometry levels. The retargeted motion of R2ET preserves motion semantics, eliminates interpenetration, and keeps self-contact without post-optimizations. (Axes of the accompanying plot: MSE and Interpenetration (%); methods compared: Ours, NKN, SAN, PMnet, PMnet*.)
The previous learning-based
methods utilize a full-motion mapping structure, which de-
codes joint rotations of the target skeleton as outputs, with
joint positions [16, 24, 25] or joint rotations [1] as inputs.
However, due to the gap between the Cartesian coordinate
space and the rotation space, the full joint position encoding
unavoidably introduces motion distortion. Meanwhile, the
full joint rotation encoding always leads to discontinuity in
the rotation space [1, 26].
In animation, we observe that artists usually copy the
motion of the source character, and then manually modify it
to preserve motion semantics and avoid translation artifacts,
e.g., interpenetration, during motion reuse in new charac-
ters. Inspired by this observation, we design a Residual
RET argeting network (R2ET) with a residual structure for
motion retargeting. This structure takes the source motion
as initialization and involves neural networks to imitate the
modifications from animators, as illustrated in Figure 1.
With this design, the coherence of the source motion is well
maintained, and the search space for retargeting solutions
during training is effectively reduced.
The key to achieving physically plausible single-body
motion retargeting is to understand two main differences be-
tween the source and target characters, 1) the differences in
bone length ratio; 2) the differences in body shape geome-
try. To reach this goal, we explore two modification mod-
ules, i.e., the skeleton-aware module and the shape-aware
module, to perceive the two differences.
On the skeleton level, the skeleton-aware module takes
the skeleton configurations as input to assist the transfer of
the source motion semantics, such as arm folding and hand
clapping, to the target character. To overcome the lack of
paired and semantics-correct ground truth, we directly take
the supervision signal from the input source motion. The
motion semantics is explicitly modeled as a normalized Dis-
tance Matrix (DM) of skeleton joints. Accordingly, the se-
mantics preservation is achieved by aligning the DM be-
tween the source and target motions (Figure 1- Semantics ).
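For illustration, a minimal PyTorch sketch of such a distance-matrix objective is given below; the per-frame normalization by the maximum pairwise distance and the mean-squared alignment loss are assumptions for this sketch, not necessarily the paper's exact choices.

```python
import torch

def distance_matrix(joints, eps=1e-8):
    # joints: (T, J, 3) joint positions over T frames
    dist = torch.cdist(joints, joints)           # (T, J, J) pairwise joint distances
    scale = dist.amax(dim=(1, 2), keepdim=True)  # assumed normalizer: max distance per frame
    return dist / (scale + eps)

def semantics_loss(src_joints, tgt_joints):
    # Align the normalized distance matrices of source and retargeted motion.
    return torch.mean((distance_matrix(src_joints) - distance_matrix(tgt_joints)) ** 2)
```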
On the shape geometry level, the shape-aware module
senses the compatibility between the target character mesh
and the skeleton adjusted after motion semantics preserva-
tion to avoid interpenetration and contact-missing. To train
the module end-to-end, we introduce two voxelized Dis-
tance Fields, i.e., the Repulsive Distance Field (RDF) and
the Attractive Distance Field (ADF) (Figure 1- Geometry ),
as the measurement tools for interpenetration and contact.
We sample the distance of the query vertices on the target
character mesh to the body surface in these two fields to es-
timate the degree of interpenetration and contact. In this
manner, the whole process is differentiable during training.
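A rough sketch of this differentiable field sampling is shown below; the sign convention of the fields (RDF negative inside the body) and the exact penalty forms are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def sample_field(field, queries):
    # field: (1, 1, D, H, W) voxelized distance field
    # queries: (N, 3) query vertex coordinates already normalized to [-1, 1]
    grid = queries.view(1, 1, 1, -1, 3)                    # (1, 1, 1, N, 3)
    vals = F.grid_sample(field, grid, align_corners=True)  # trilinear, differentiable
    return vals.view(-1)                                   # (N,)

def geometry_losses(rdf, adf, queries):
    # Assumed conventions: RDF < 0 inside the body (interpenetration),
    # ADF grows away from surfaces that should stay in contact.
    d_rep = sample_field(rdf, queries)
    d_att = sample_field(adf, queries)
    loss_penetration = F.relu(-d_rep).mean()  # push vertices out of the body
    loss_contact = d_att.mean()               # pull contact vertices toward the surface
    return loss_penetration, loss_contact
```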
In practice, we find there always exists a contradic-
tion between the preservation of motion semantics and the
avoidance of interpenetration. We, therefore, propose a bal-
ancing gate to make a trade-off between the skeleton-level
and geometry-level modifications by learning an adjusting
weight. By leaving the weight to the user, our R2ET also
accepts interactive fine control from users.
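As a minimal illustration of such a gate (the additive composition of the two residuals on the copied source motion is an assumption of this sketch):

```python
def balanced_retarget(q_src, delta_skeleton, delta_shape, w):
    # q_src: source joint rotations used as initialization
    # delta_skeleton / delta_shape: residuals from the skeleton-aware and shape-aware modules
    # w in [0, 1]: balancing weight, learned or set interactively by the user
    w = max(0.0, min(1.0, float(w)))
    return q_src + (1.0 - w) * delta_skeleton + w * delta_shape
```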
With the above main designs, our R2ET preserves the
motion semantics of the source character and avoids in-
terpenetration and contact-missing issues in a single-pass
without post-processing. We evaluate our method on var-
ious complex motion sequences and a wide range of char-
acter geometries from skinny to bulky. The qualitative and
quantitative results show that our R2ET outperforms the ex-
isting learning-based methods by a large margin. The con-
tributions of this work are summarized in three-fold:
• A novel residual network structure is proposed for
neural motion retargeting, which involves a skeleton-
aware modification module, a shape-aware modifica-
tion module, and a balancing gate.
• A normalized joint Distance Matrix is presented to guide the training of the skeleton-aware module for
explicit motion semantics modeling, and two Distance
Fields are introduced to achieve differentiable pose ad-
justment learning.
• Extensive experiments on the Mixamo [2] dataset
demonstrate that our R2ET achieves the state-of-the-
art performance qualitatively and quantitatively.
|
Yang_Good_Is_Bad_Causality_Inspired_Cloth-Debiasing_for_Cloth-Changing_Person_Re-Identification_CVPR_2023 | Abstract
Entangled representation of clothing and identity (ID)-
intrinsic clues are potentially concomitant in conventional
person Re-IDentification (ReID). Nevertheless, eliminating
the negative impact of clothing on ID remains challenging
due to the lack of theory and the difficulty of isolating the
exact implications. In this paper, a causality-based Auto-
Intervention Model, referred to as AIM, is first proposed to
mitigate clothing bias for robust cloth-changing person ReID
(CC-ReID). Specifically, we analyze the effect of clothing
on the model inference and adopt a dual-branch model to
simulate causal intervention. Progressively, clothing bias is eliminated automatically with model training. AIM is
encouraged to learn more discriminative ID clues that are
free from clothing bias. Extensive experiments on two stan-
dard CC-ReID datasets demonstrate the superiority of the
proposed AIM over other state-of-the-art methods.
| 1. Introduction
Short-term person Re-IDentification (ReID) aims to
match a person within limited time and space conditions,
under the assumption that each individual maintains clothing
consistency. Of both traditional and deep learning methods,
the best way to deceive current ReID models is by having
pedestrians alter their clothing. This highlights the inade-quacy of existing short-term ReID methods [ 4,42,45]. To
solve this issue, Cloth-Changing person ReID (CC-ReID) [ 2]
has been recently explored, which is increasingly critical in
public security systems for tracking down disguised criminal
suspects. For example, witnesses typically provide descrip-
tive details ( e.g., clothing, color, and stature) when describ-
ing suspects, but it is unlikely that criminals will wear the
same clothes upon their reappearance. It follows that cloth-
ing information will disrupt the existing ReID system [ 40],
*Corresponding author: [email protected]. The code will be publicly available at https://github.com/BoomShakaY/AIM-CCReID.
Figure 1. Illustration of the entangled representation of clothing and ID and how clothing bias affects model prediction.
which leads to a growing need felt by the research commu-
nity to study CC-ReID task.
As one of the accompanying objects of people, clothing
is an essential factor in social life. There are two possible
responses when people identify others: confusing the perception of identity (ID) or clearly distinguishing different IDs through immutable visual appearance (faces or soft-
biometrics). The former manifests as the mix-up of IDs due
to the similarity in flexible visual appearance ( e.g., cloth-
ing) of people, while the latter is caused by the high-level
semantic ( e.g., ID-clues) perceived by humans, transcending
the similarity that comes with clothing. The above reactions
reflect that the relevance of clothing to ID is a double-edged
sword. Traditionally, clothing is a helpful characteristic for
ReID, where people wearing the same clothes are likely
to have the same ID. However, entangled representation of
clothing and ID leads the statistical-based neural networks
to converge towards shallow clothing features that can be
easily distinguished. This statistical association gives the ReID model a faulty perception that there is a strong rela-
tion between clothing and ID, which would undermine the
ultimate prediction for seeking robust and sensible results.
Recent years have witnessed numerous deep learning at-
tempts to model discriminative clues for person distinguish-
able learning. However, plenty of misleading information
exists in these attempts, as some non-ID areas ( e.g., clothing
and background) may correlate highly with ID. As shown
in Fig. 1(a), conventional ReID methods focus on image
regions with distinct discrimination characteristics [ 33,46],
leading to the entanglement of clothing and ID-intrinsic
clues. In CC-ReID, this phenomenon will lead to biased classification, misleading the ReID model to focus on the
non-ID areas that appear to be ID-related. As shown in
Fig. 1(b), clothing may mislead the classifier by giving high
scores to person images with similar colors or patterns, but
ignoring the faces and details that matter. To this end, if
clothing bias can be captured and removed from the existing model, it will enhance the contribution of genuinely
ID-relevant features to ID discrimination.
Lacking practical tools to alleviate clothing bias makes it
challenging to correct the misleading attention of the current
ReID model. Even knowing that clothing is a critical influ-
encing factor, it is not apparent how to intervene in clothing
directly in the representation space. Not to mention that rough negligence on clothing will damage the integrity of
person representation, e.g., the mask-based [ 12,16,41] and
gait-based methods [ 6,17] try to obtain cloth-agnostic rep-
resentations by forcibly covering up or neglecting clothing
information. While effective, these straightforward methods
lose a plethora of semantic information and overlook the
factual relation between clothing and real ID.
Causal inference is a recently emerging theory [ 19]
widely used to extract causality [ 10] and explore the true
association between two events [ 25]. Thanks to causal in-
ference, we can revisit the issue of clothing bias from a
causal perspective. The ladder of causality [ 7] divides cogni-
tive abilities into three levels from low to high: association,
intervention, and counterfactual analysis. Many aforemen-
tioned research works explore CC-ReID from the surface
level of data association, while the more advanced cognitive levels are not covered. Intervention allows us to incorporate clothing
knowledge into the model prediction and eliminate the cor-
responding effects, in contrast to counterfactual data, which
are difficult to obtain under strict variable control. Therefore,
this paper attempts to start with intervention by examining
the perturbation of clothing on the results and removing such
perturbation from model predictions. Through the causal intervention, we attempt to remove the effect of clothing without destroying semantic integrity, and to further optimize the learned discriminative features.
To bring the theoretical intervention into practice, we de-
sign a dual-branch model to capture clothing bias and ID
clues separately and strip clothing inference from ID repre-
sentation learning to simulate the entire intervention process.
The clothing branch represents the model’s perception of
clothing, breaking the false association between clothing and
ID brought by the entangled representation. Subsequently,
while maintaining semantic integrity, this paper achieves bias
elimination and improves the robustness of ID representation by constantly mitigating the influence of clothing on ID
classification. Further, to improve the accuracy of clothing
bias distillation, as clothing has top-middle-bottom charac-
teristics, this paper adopts the pyramid matching strategy [8] to
enhance the partial feature representation of clothing. Ad-
ditionally, we introduce two learning objectives explicitly
designed to encourage clothing mitigation. A knowledge
transfer objective is adopted to strengthen the perception of
clothing bias entangled with ID-intrinsic representation. A
bias elimination objective is utilized to cooperate with the
causal auto-intervention for ID-intrinsic feature extraction.
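For illustration only, a minimal sketch of such a dual-branch layout is given below; the logit-subtraction form of "removing" the clothing effect, the head names, and the returned tuple are assumptions of this sketch rather than AIM's exact formulation.

```python
import torch.nn as nn

class DualBranchReID(nn.Module):
    """One ID-intrinsic branch and one clothing-bias branch; the clothing
    branch's contribution is stripped from the ID prediction."""
    def __init__(self, backbone_id, backbone_cloth, feat_dim, num_ids, num_clothes):
        super().__init__()
        self.backbone_id = backbone_id        # assumed to return (B, feat_dim) features
        self.backbone_cloth = backbone_cloth  # assumed to return (B, feat_dim) features
        self.id_head = nn.Linear(feat_dim, num_ids)
        self.cloth_head = nn.Linear(feat_dim, num_clothes)
        self.bias_head = nn.Linear(feat_dim, num_ids)  # maps clothing bias into ID space

    def forward(self, x):
        f_id = self.backbone_id(x)
        f_cloth = self.backbone_cloth(x)
        logits_id = self.id_head(f_id)
        logits_bias = self.bias_head(f_cloth)
        # "Intervened" ID prediction: remove the clothing branch's contribution.
        logits_debiased = logits_id - logits_bias
        return logits_debiased, logits_id, logits_bias, self.cloth_head(f_cloth)
```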
Our contributions can be summarized threefold:
•We propose a novel causality-based Auto-Intervention
Model (AIM) for Cloth-Changing person Re-
IDentification (CC-ReID). The proposed AIM
guarantees that the learned representation is unaffected by clothing bias. To the best of our knowledge, AIM is
the first model to introduce causality into CC-ReID.
•A dual-branch model is proposed to simulate the causal
intervention. Clothing bias is gradually stripped from
the entangled ID-clothing representation without destroying semantic integrity, which optimizes the ID-
intrinsic feature learning.
•We comprehensively demonstrate how clothing bias
affects the current ReID model and highlight the signif-
icance of causal inference in CC-ReID. The experimen-
tal results on two CC-ReID datasets, PRCC-ReID [ 38]
and LTCC-ReID [ 26], show that AIM outperforms state-
of-the-art methods.
|
Yang_Semantic_Human_Parsing_via_Scalable_Semantic_Transfer_Over_Multiple_Label_CVPR_2023 | Abstract
This paper presents Scalable Semantic Transfer (SST),
a novel training paradigm, to explore how to leverage the
mutual benefits of the data from different label domains (i.e.
various levels of label granularity) to train a powerful hu-
man parsing network. In practice, two common applica-
tion scenarios are addressed, termed universal parsing and
dedicated parsing , where the former aims to learn homoge-
neous human representations from multiple label domains
and switch predictions by only using different segmentation
heads, and the latter aims to learn a specific domain pre-
diction while distilling the semantic knowledge from other
domains. The proposed SST has the following appealing
benefits: (1)it can capably serve as an effective train-
ing scheme to embed semantic associations of human body
parts from multiple label domains into the human repre-
sentation learning process; (2)it is an extensible semantic
transfer framework without predetermining the overall rela-
tions of multiple label domains, which allows continuously
adding human parsing datasets to promote the training. (3)
the relevant modules are only used for auxiliary training
and can be removed during inference, eliminating the extra
reasoning cost. Experimental results demonstrate SST can
effectively achieve promising universal human parsing per-
formance as well as impressive improvements compared to
its counterparts on three human parsing benchmarks (i.e.,
PASCAL-Person-Part, ATR, and CIHP). Code is available
athttps://github.com/yangjie-cv/SST .
| 1. Introduction
Human parsing, which aims to assign pixel-wise cate-
gory predictions for human body parts, has played a criti-
cal role in human-oriented visual content analysis, editing,
and generation, e.g., virtual try-on [7], human motion trans-
fer [14], and human activity recognition [8]. Prior methods
*Corresponding author.
have long been dominated by developing powerful
network architectures [11, 18, 24, 25, 32]. However, these
network designs usually serve as general components for
human parsing in a specific labeling system, and how to use
the data from multiple label domains to further improve the
accuracy of the parsing network is still an open issue.
Recent studies [12, 20] have proposed using multiple la-
bel domains to train a universal parsing network by con-
structing graph neural networks that learn semantic associ-
ations from different label domains. Graphonomy [10, 20]
investigates cross-domain semantic coherence through se-
mantic aggregation and transfer on explicit graph structures.
Grapy-ML [12] extends this method by stacking three levels
of human part graph structures, from coarse to fine granu-
larity. Despite these graph-based methods consider prior
human body structures, such as the spatial associations of
human parts and the semantic associations of different la-
bel domains as in Fig. 1-(a), they have limitations in their
ability to dynamically add new label domains due to their
dependence on pre-determined graph structures. Addition-
ally, explicit graph reasoning incurs extra inference costs,
making it impractical in some scenarios. Therefore, there
is a need for a more general, scalable, and simpler manner
to harness the mutual benefits of data from multiple label
domains to enhance given human parsing networks.
In this work, we propose Scalable Semantic Transfer
(SST), a novel training paradigm that builds plug-and-play
modules to effectively embed semantic associations of hu-
man body parts from multiple label domains into the given
parsing network. The proposed SST scheme can be eas-
ily applied to two common scenarios, i.e., universal pars-
ing and dedicated parsing. Similar to [12, 20], universal
parsing aims to enforce the parsing network to learn homo-
geneous human representation from multiple label domains
with the help of the SST scheme in the training phase. It
achieves different levels of human parsing prediction for an
arbitrary image in the inference phase , as illustrated in Fig-
ure 1-(b). In contrast, dedicated parsing aims to optimize
a parsing network for a specific label domain by employ-
Figure 1. (a) The Motivation of SST: 1) Intra-Domain Hierarchical Semantic Structure (Upper): The spatial correlated two parts are
connected by a white line; 2) Cross-Domain Semantic Mapping and Consistency (Lower): We regularize the semantic consistency (orange
line) between original category representations (solid line) and mapped ones (dot line) for two datasets (green for dataset-1 and blue for
dataset-2). (b) The Training and Inference Phase of Universal Parsing. (c) The Training and Inference Phase of Dedicated Parsing.
ing the SST scheme to distill the semantic knowledge from
other label domains in the training phase, and finally real-
izes more accurate prediction for the specific label domain
in the inference phase, as shown in Fig. 1-(c).
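As a minimal sketch of the universal-parsing interface (a shared backbone with one segmentation head per label domain, switched at inference); the class and parameter names are placeholders, and the auxiliary SST transfer modules used only during training are omitted.

```python
import torch.nn as nn

class UniversalParser(nn.Module):
    """Shared human representation trained on several label domains;
    predictions are switched by selecting a per-domain head."""
    def __init__(self, backbone, feat_dim, classes_per_domain):
        super().__init__()
        self.backbone = backbone  # assumed to return a (B, feat_dim, H, W) feature map
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(feat_dim, n_cls, kernel_size=1)
            for name, n_cls in classes_per_domain.items()
        })

    def forward(self, x, domain):
        feat = self.backbone(x)           # shared human representation
        return self.heads[domain](feat)   # per-pixel logits in the chosen label domain
```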
The proposed SST has several appealing benefits: (1)
Generality. It is a general training scheme that effectively
incorporates prior knowledge of human body parts into the
representation learning process of a given parsing network.
It enables the integration of both intra-domain spatial cor-
relations and inter-domain semantic associations, making it
suitable for training both universal and dedicated parsing
networks. (2) Scalability. It is a scalable framework that
enables the addition of new datasets for more effective train-
ing. In practice, universal parsing enables the parsing net-
work to acquire a more robust human representation by ex-
panding the label domain pool, while dedicated parsing fa-
cilitates the transfer of rich semantic knowledge from more
datasets to enhance the parsing network for a specific label
domain. (3) Simplicity. All modules in SST are plug-and-
play ones that are only used for auxiliary training and can
be dropped out during inference, eliminating unnecessary
model complexity in real-world applications.
The main contributions are summarized as follows:
1) We propose a novel training paradigm called Scalable
Semantic Transfer (SST) that effectively leverages the
benefits of data from different label domains to train
powerful universal and dedicated parsing networks.
2) We introduce three plug-and-play modules in SST that
incorporate prior knowledge of human body parts into
the training process of a given parsing network, which
can be removed during inference.
3) Extensive experiments demonstrate the effectiveness
of SST on both universal and dedicated parsing tasks,
achieving impressive results on three human parsing
datasets compared with the counterparts. |
Zhang_Backdoor_Defense_via_Deconfounded_Representation_Learning_CVPR_2023 | Abstract
Deep neural networks (DNNs) are recently shown to be
vulnerable to backdoor attacks, where attackers embed hid-
den backdoors in the DNN model by injecting a few poi-
soned examples into the training dataset. While exten-
sive efforts have been made to detect and remove back-
doors from backdoored DNNs, it is still not clear whether
a backdoor-free clean model can be directly obtained from
poisoned datasets. In this paper, we first construct a causal
graph to model the generation process of poisoned data
and find that the backdoor attack acts as the confounder,
which brings spurious associations between the input im-
ages and target labels, making the model predictions less
reliable. Inspired by the causal understanding, we pro-
pose the Causality-inspired Backdoor Defense (CBD), to
learn deconfounded representations for reliable classifi-
cation. Specifically, a backdoored model is intentionally
trained to capture the confounding effects. The other clean
model dedicates to capturing the desired causal effects by
minimizing the mutual information with the confounding
representations from the backdoored model and employ-
ing a sample-wise re-weighting scheme. Extensive exper-
iments on multiple benchmark datasets against 6 state-of-
the-art attacks verify that our proposed defense method is
effective in reducing backdoor threats while maintaining
high accuracy in predicting benign samples. Further anal-
ysis shows that CBD can also resist potential adaptive at-
tacks. The code is available at https://github.com/
zaixizhang/CBD .
| 1. Introduction
Recent studies have revealed that deep neural networks
(DNNs) are vulnerable to backdoor attacks [18, 38, 53],
*Qi Liu is the corresponding author.
Trigger Pattern
ImageDog Turtle Cat
Predicted LabelDNN Classifier
(a) Illustration of Backdoor Attack (b) Causal Graph X YBFigure 1. (a) A real example of the backdoor attack. The back-
doored DNN classifies the “turtle” image with a trigger pattern
as the target label “dog”. (b) The causal graph represents the
causalities among variables: Xas the input image, Yas the la-
bel, and Bas the backdoor attack. Besides the causal effect of X
onY(X→Y), the backdoor attack can attach trigger patterns
to images ( B→X), and change the labels to the targeted label
(B→Y). Therefore, as a confounder , the backdoor attack B
opens a spurious path between XandY(X←B→Y).
where attackers inject stealthy backdoors into DNNs by poi-
soning a few training data. Specifically, backdoor attack-
ers attach the backdoor trigger ( i.e.,a particular pattern) to
some benign training data and change their labels to the
attacker-designated target label. The correlations between
the trigger pattern and target label will be learned by DNNs
during training. In the inference process, the backdoored
model behaves normally on benign data while its predic-
tion will be maliciously altered when the backdoor is ac-
tivated. The risk of backdoor attacks hinders the applica-
tions of DNNs to some safety-critical areas such as auto-
matic driving [38] and healthcare systems [14].
On the contrary, human cognitive systems are known to
be immune to input perturbations such as stealthy trigger
patterns induced by backdoor attacks [17]. This is because
humans are more sensitive to causal relations than the sta-
tistical associations of nuisance factors [29,60]. In contrast,
deep learning models that are trained to fit the poisoned
datasets can hardly distinguish the causal relations from the statistical associations brought by backdoor attacks. Based
on causal reasoning, we can identify causal relation [50,52]
and build robust deep learning models [70, 71]. Therefore,
it is essential to leverage causal reasoning to analyze and
mitigate the threats of backdoor attacks.
In this paper, we focus on the image classification tasks
and aim to train backdoor-free models on poisoned datasets
without extra clean data. We first construct a causal graph
to model the generation process of backdoor data where
nuisance factors ( i.e.,backdoor trigger patterns) are consid-
ered. With the assistance of the causal graph, we find that
the backdoor attack acts as the confounder and opens a spu-
rious path between the input image and the predicted label
(Figure 1). If DNNs have learned the correlation of such a
spurious path, their predictions will be changed to the target
labels when the trigger is attached.
Motivated by our causal insight, we propose Causality-
inspired Backdoor Defense (CBD) to learn deconfounded
representations for classification. As the backdoor attack
is stealthy and hardly measurable, we cannot directly block
the backdoor path by the backdoor adjustment from causal
inference [52]. Inspired by recent advances in disentan-
gled representation learning [20, 36, 66], we instead aim
to learn deconfounded representations that only preserve
the causality-related information. Specifically in CBD, two
DNNs are trained, which focus on the spurious correlations
and the causal effects respectively. The first DNN is de-
signed to intentionally capture the backdoor correlations
with an early stop strategy. The other clean model is then
trained to be independent of the first model in the hidden
space by minimizing mutual information. The information
bottleneck strategy and sample-wise re-weighting scheme
are also employed to help the clean model capture the causal
effects while relinquishing the confounding factors. After
training, only the clean model is used for downstream clas-
sification tasks. In summary, our contributions are:
• From a causal perspective, we find the backdoor at-
tack acts as the confounder that causes spurious corre-
lations between the input images and the target label.
• With the causal insight, we propose a Causality-
inspired Backdoor Defense (CBD), which learns de-
confounded representations to mitigate the threat of
poisoning-based backdoor attacks.
• Extensive experiments with 6 representative backdoor
attacks are conducted. The models trained using CBD
are of almost the same clean accuracy as they were
directly trained on clean data and the average backdoor
attack success rates are reduced to around 1 %, which
verifies the effectiveness of CBD.
• We explore one potential adaptive attack against CBD, which tries to make the backdoor attack stealthier by
adversarial training. Experiments show that CBD is
robust and resistant to such an adaptive attack.
|
Zhai_Feature_Representation_Learning_With_Adaptive_Displacement_Generation_and_Transformer_Fusion_CVPR_2023 | Abstract
Micro-expressions are spontaneous, rapid and subtle fa-
cial movements that can neither be forged nor suppressed.
They are very important nonverbal communication clues,
but are transient and of low intensity thus difficult to rec-
ognize. Recently deep learning based methods have been
developed for micro-expression (ME) recognition using fea-
ture extraction and fusion techniques, however, targeted
feature learning and efficient feature fusion still lack fur-
ther study according to the ME characteristics. To address
these issues, we propose a novel framework Feature Repre-
sentation Learning with adaptive Displacement Generation
and Transformer fusion (FRL-DGT), in which a convolu-
tional Displacement Generation Module (DGM) with self-
supervised learning is used to extract dynamic features from
onset/apex frames targeted to the subsequent ME recogni-
tion task, and a well-designed Transformer Fusion mecha-
nism composed of three Transformer-based fusion modules
(local, global fusions based on AU regions and full-face fu-
sion) is applied to extract the multi-level informative fea-
tures after DGM for the final ME prediction. The extensive
experiments with solid leave-one-subject-out (LOSO) eval-
uation results have demonstrated the superiority of our pro-
posed FRL-DGT to state-of-the-art methods.
| 1. Introduction
As a subtle and short-lasting change, micro-expression
(ME) is produced by unconscious contractions of facial
muscles and lasts only 1/25th to 1/5th of a second, as illus-
trated in Figure 1, revealing a person’s true emotions under-
neath the disguise [8,35]. The demands for ME recognition
technology are becoming more and more extensive [2, 18],
*Corresponding author.
Figure 1. A video sequence depicting the order in which onset, apex, and offset frames occur. Sample frames are from a “surprise”
sequence in CASME II. Our goal is to design a novel feature rep-
resentation learning method based on an onset-apex frame pair for
facial ME recognition. (Images from CASME II ©Xiaolan Fu)
including multimedia entertainment, film-making, human-
computer interaction, affective computing, business nego-
tiation, teaching and learning, etc. Since MEs have invol-
untary muscle movements with short duration and low in-
tensity in nature, the research of ME is attractive but diffi-
cult [1, 22]. Therefore, it is crucial and desired to extract
robust feature representations to conduct ME analyses.
A lot of feature representation methods are already avail-
able including those relying heavily on hand-crafted fea-
tures with expert experiences [4, 11, 25] and deep learning
techniques [33, 36, 40]. However, the performance of deep
learning networks is still restricted for ME classification,
mainly due to the complexity of ME and insufficient train-
ing data [28, 47]. Deep learning methods can automatically
extract optimal features and offer an end-to-end classifica-
tion, but in the existing solutions, dynamic feature extrac-
tion is only taken as a data preprocessing strategy. It is not
integrated with the subsequent neural network, thus failing
to adapt the generated dynamic features to a specific train-
ing task, leading to redundancy or missing features. Such
shortcoming motivates us to design a dynamic feature ex-
tractor to adapt the subsequent ME recognition task.
In this paper, we propose a novel end-to-end feature rep-
resentation learning framework named FRL-DGT, which
is constructed with a well-designed adaptive Displacement
Generation Module (DGM) and a subsequent Transformer
Fusion mechanism composed of the Transformer-based lo-
cal, global, and full-face fusion modules, as illustrated in
Figure 2, for ME feature learning to model global informa-
tion while focusing on local features. Our FRL-DGT only
requires a pair of images, i.e., the onset and the apex frames,
which are extracted from the frame sequence.
Unlike the previous methods which extract optical flow
or dynamic imaging features, our DGM and corresponding
loss functions are designed to generate the displacement be-
tween expression frames, using a convolution module in-
stead of the traditional techniques. The DGM is involved
in training with the subsequent ME classification module,
and therefore its parameters can be tuned based on the feed-
back from classification loss to generate more targeted dy-
namic features adaptively. We shall emphasize that the la-
beled training data for ME classification is very limited and
therefore the supervised data for our DGM is insufficient.
To handle this case, we resort to a self-supervised learning
strategy and sample sufficient additional random pairs of
image sequence as the extra training data for the DGM, so
that it is able to fully extract the necessary dynamic features
adaptively for the subsequent ME recognition task.
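A toy sketch of such a displacement generator and one possible form of the self-supervised signal (warping the onset frame by the predicted displacement to reconstruct the apex frame) is given below; the architecture, channel counts, and the warping loss are assumptions, not the actual DGM design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DGM(nn.Module):
    """Toy displacement generator: input is the onset/apex pair (6 channels),
    output is a 2-channel displacement field in normalized coordinates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, onset, apex):
        return self.net(torch.cat([onset, apex], dim=1))  # (B, 2, H, W)

def self_supervised_loss(dgm, onset, apex):
    # Warp the onset frame by the predicted displacement and match the apex frame;
    # any randomly sampled frame pair can supervise this term.
    B, _, H, W = onset.shape
    disp = dgm(onset, apex)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).to(onset.device).expand(B, H, W, 2)
    grid = base + disp.permute(0, 2, 3, 1)  # assumed (dx, dy) displacement ordering
    warped = F.grid_sample(onset, grid, align_corners=True)
    return F.l1_loss(warped, apex)
```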
Regarding fusing the dynamic features extracted from
DGM, we first adopt the AU (Action Unit) region partition-
ing method from FACS (Facial Action Coding System) [49]
to get 9 AUs, and then crop the frames and their displace-
ments into blocks based on the 9 AUs and the full-face re-
gion as input to the Transformer Fusion. We argue that the
lower layers in the Transformer Fusion should encode and
fuse different AU region features in a more targeted way,
while the higher layers can classify MEs based on the infor-
mation of all AUs. We propose a novel fusion layer with at-
tentions as a linear fusion before attention mechanism [45],
aiming at a more efficient and accurate integration of the
embedding vectors. The fusion layers are interleaved with
Transformer’s basic blocks to form a new multi-level fusion
module for classification to ensure it to better learn global
information and long-term dependencies of ME.
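One possible reading of "linear fusion before attention" is sketched below, where the AU-region token embeddings are first mixed by a learned linear map across the token axis and then refined by standard multi-head self-attention; this interpretation and the layer sizes are assumptions for illustration.

```python
import torch.nn as nn

class FusionLayer(nn.Module):
    """Linear fusion across tokens, followed by self-attention."""
    def __init__(self, num_tokens, dim, heads=4):
        super().__init__()
        self.mix = nn.Linear(num_tokens, num_tokens)   # fuses embeddings across the token axis
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, tokens):                          # tokens: (B, N, D)
        fused = self.mix(tokens.transpose(1, 2)).transpose(1, 2)
        out, _ = self.attn(fused, fused, fused)
        return self.norm(fused + out)
```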
To summarize, our main contributions are as follows:
• We propose a novel end-to-end network FRL-DGT
which fully explores AU regions from onset-apex pair
and the displacement between them to extract compre-
hensive features via Transformer Fusion mechanism
with the Transformer-based local, global, and full-face
feature fusions for ME recognition.
• Our DGM is well-trained with self-supervised learn-
ing, and makes full use of the subsequent classifica-
tion supervision information in the training phase, sothat the trained DGM model is able to generate more
targeted ME dynamic features adaptively.
• We present a novel fusion layer to exploit the linear
fusion before attention mechanism in Transformer for
fusing the embedding features at both local and global
levels with simplified computation.
• We demonstrate the effectiveness of each module with
ablation study and the outperformance of the proposed
FRL-DGT to the SOTA methods with extensive exper-
iments on three popular ME benchmark datasets.
|
Zhang_Ref-NPR_Reference-Based_Non-Photorealistic_Radiance_Fields_for_Controllable_Scene_Stylization_CVPR_2023 | Abstract
Current 3D scene stylization methods transfer textures
and colors as styles using arbitrary style references, lack-
ing meaningful semantic correspondences. We introduce
Reference-Based Non-Photorealistic Radiance Fields (Ref-
NPR) to address this limitation. This controllable method
stylizes a 3D scene using radiance fields with a single styl-
ized 2D view as a reference. We propose a ray registra-
tion process based on the stylized reference view to ob-
tain pseudo-ray supervision in novel views. Then we ex-
ploit semantic correspondences in content images to fill oc-
cluded regions with perceptually similar styles, resulting in
non-photorealistic and continuous novel view sequences.
Our experimental results demonstrate that Ref-NPR out-
performs existing scene and video stylization methods re-
garding visual quality and semantic correspondence. The
code and data are publicly available on the project page at
https://ref-npr.github.io .
| 1. Introduction
In the past decade, there has been a rising demand for
stylizing and editing 3D scenes and objects in various fields,
including augmented reality, game scene design, and digital artwork. Traditionally, professionals achieve these tasks by
creating 2D reference images and converting them into styl-
ized 3D textures. However, establishing direct cross-modal
correspondence is challenging and often requires significant
time and effort to obtain stylized texture results similar to
the 2D reference schematics.
A critical challenge in the 3D stylization problem is to
ensure that stylized results are perceptually similar to the
given style reference. Benefiting from radiance fields [2,
10, 28, 29, 37, 46], recent novel-view stylization meth-
ods [5,8,15,16,30,45] have greatly facilitated style transfer from
an arbitrary 2D style reference to 3D implicit representa-
tions. However, these methods do not provide explicit con-
trol over the generated results, making it challenging to
specify the regions where certain styles should be applied
and ensure the visual quality of the results. On the other
hand, reference-based video stylization methods allow for
the controllable generation of stylized novel views with bet-
ter semantic correspondence between content and style ref-
erence, as demonstrated in works like [17, 38]. However,
these methods may diverge from the desired style when styl-
izing a frame sequence with unseen content, even with the
assistance of stylized keyframes.
To address the aforementioned limitations, we propose
a new paradigm for stylizing 3D scenes using a single
stylized reference view. Our approach, called Reference-
Based Non-Photorealistic Radiance Fields (Ref-NPR), is a
controllable scene stylization method that takes advantage
of volume rendering to maintain cross-view consistency
and establish semantic correspondence in transferring style
across the entire scene.
Ref-NPR utilizes stylized views from radiance fields
as references instead of arbitrary style images to achieve
both flexible controllability and multi-view consistency.
A reference-based ray registration process is designed to
project the 2D style reference into 3D space by utilizing the
depth rendering of the radiance field. This process provides
pseudo-ray supervision to maintain geometric and percep-
tual consistency between stylized novel views and the styl-
ized reference view. To obtain semantic style correspon-
dence in occluded regions, Ref-NPR performs template-
based feature matching, which uses high-level semantic fea-
tures as implicit style supervision. The correspondence in
the content domain is utilized to select style features in the
given style reference, which are then used to transfer style
globally, especially in occluded regions. By doing so, Ref-
NPR generates the entire stylized 3D scene from a single
stylized reference view.
Ref-NPR produces visually appealing stylized views that
maintain both geometric and semantic consistency with the
given style reference, as presented in Fig. 1. The generated
stylized views are perceptually consistent with the refer-
ence while also exhibiting high visual quality across various
datasets. We have demonstrated that Ref-NPR, when using
the same stylized view as reference, outperforms state-of-
the-art scene stylization methods [30, 45] both qualitatively
and quantitatively.
In summary, our paper makes three contributions.
Firstly, we introduce a new paradigm for stylizing 3D
scenes that allows for greater controllability through the
use of a stylized reference view. Secondly, we propose a
novel approach called Ref-NPR, consisting of a reference-
based ray registration process and a template-based feature
matching scheme to achieve geometrically and percep-
tually consistent stylizations. Finally, our experiments
demonstrate that Ref-NPR outperforms state-of-the-art
scene stylization methods such as ARF and SNeRF both
qualitatively and quantitatively. More comprehensive
results and a demo video can be found in the supplementary
material and on our project page.
|
Zhang_Real-Time_Controllable_Denoising_for_Image_and_Video_CVPR_2023 | Abstract
Controllable image denoising aims to generate clean
samples with human perceptual priors and balance sharp-
ness and smoothness. In traditional filter-based denoising
methods, this can be easily achieved by adjusting the fil-
tering strength. However, for NN (Neural Network)-based
models, adjusting the final denoising strength requires per-
forming network inference each time, making it almost im-
possible for real-time user interaction. In this paper, we in-
troduce Real-time Controllable Denoising (RCD), the first
deep image and video denoising pipeline that provides a
fully controllable user interface to edit arbitrary denois-
ing levels in real-time with only one-time network inference.
Unlike existing controllable denoising methods that require
multiple denoisers and training stages, RCD replaces the
last output layer (which usually outputs a single noise map)
of an existing CNN-based model with a lightweight module
that outputs multiple noise maps. We propose a novel Noise
Decorrelation process to enforce the orthogonality of the
noise feature maps, allowing arbitrary noise level control
through noise map interpolation. This process is network-
free and does not require network inference. Our experi-
ments show that RCD can enable real-time editable image
and video denoising for various existing heavy-weight mod-
els without sacrificing their original performance.
| 1. Introduction
Image and video denoising are fundamental problems
in computational photography and computer vision. With
the development of deep neural networks [12, 26, 49, 59],
model-based denoising methods have achieved tremendous
success in generating clean images and videos with superior
denoising scores [4,55,57]. However, it should be noted that
the improvement in reconstruction accuracy (e.g., PSNR,
SSIM) is not always accompanied by an improvement in
visual quality, which is known as the Perception-Distortion
trade-off [6]. In traditional denoising approaches, we can
easily adjust the denoising level by tuning related control
*Equal contribution.
Figure 1. Real-time controllable denoising allows users to further tune the restored results to achieve the Perception-Distortion trade-off. A-B: tuning with changing denoising intensity. C-E: tuning without changing denoising intensity.
parameters and deriving our preferred visual results. How-
ever, for typical deep network methods, we can only restore
the degraded image or video to a fixed output with a prede-
termined restoration level.
In recent years, several modulation methods have been
proposed to generate continuous restoration effects be-
tween two pre-defined denoising levels. These methods can
be categorized into two kinds: interpolation-based meth-
ods [17,24,50,51], which use deep feature interpolation lay-
ers, and condition-network-based methods, which import an
extra condition network for denoising control [9, 25, 39].
Essentially, both types of methods are designed based on
the observation that the outputs of the network change con-
tinuously with the modulation of features/filters. This ob-
servation enables deep denoising control, but it also intro-
duces several limitations. First, there is a lack of explain-
ability, as the relationship between the control parameters
(how to modulate features) and the control operation (how
the network outputs are changed) is unclear [24]. This in-
dicates that black-box operators (network layers) must be
used to encode them. Second, the use of control parame-
ters as network inputs requires entire network propagation
each time control parameters change, resulting in a lack of
efficiency. Lastly, current modulation methods often require an explicit degradation level during training, which is hard to obtain for real-world samples. As a result, cur-
rent controllable denoising methods only focus on synthetic
Figure 2. Comparison of pipelines between conventional controllable denoising and our RCD. RCD achieves real-time noise control by manipulating editable noises directly.
noise benchmarks. Furthermore, both interpolation-based
and condition-network-based methods have their own draw-
backs. Interpolation-based methods often require multi-
ple training stages, including pretraining two basic models
(start level and end level). On the other hand, condition-
network-based methods are strenuous to jointly optimize
the base network and the condition network.
In this paper, we study the problem: Can we
achieve real-time controllable denoising that abandons the
auxiliary network and requires no network forward propa-
gation for changing restoration effects at test time?
Towards this goal, we propose Real-time Control-
lable Denoising method (RCD), a lightweight pipeline for
enabling rapid denoising control to achieve Perception-
Distortion Balance (See Fig. 1). Our RCD can be plugged
into any noise-generate-based restoration methods [11, 46,
54, 55] with just a few additional calculations. Specifi-
cally, we replace the last layer of an existing denoising
network (which usually outputs a single noise map) with a
lightweight module that generates multiple noise maps with
different noise levels. We utilize a novel Noise Decorrela-
tion process to enforce the orthogonality of the noise distri-
bution of these noise maps during training. As a result, we
can attain arbitrary denoising effects by simple linear inter-
polation of these noise maps. Since this process does not
require network inference, it makes real-time user interac-
tion possible even for heavy denoising networks.
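For illustration, the edit-time step can be as simple as the sketch below: the network is run once to produce K noise maps at increasing levels, and moving the user's slider only interpolates between them; the particular piecewise-linear interpolation scheme shown here is an assumption, not necessarily RCD's exact rule.

```python
import math
import torch

def edit_denoise(noisy, noise_maps, level):
    # noisy: (C, H, W); noise_maps: (K, C, H, W), predicted once by the denoiser.
    # level: user-chosen float in [0, K-1]; no network inference happens here,
    # so the result updates in real time as the slider moves.
    K = noise_maps.shape[0]
    k = min(max(int(math.floor(level)), 0), K - 2)
    t = min(max(float(level) - k, 0.0), 1.0)
    noise = (1.0 - t) * noise_maps[k] + t * noise_maps[k + 1]
    return noisy - noise
```

The interpolation is only meaningful because the noise maps are trained (via the Noise Decorrelation process) to be mutually orthogonal, so mixing them does not double-count the same noise component.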
Fig. 2 illustrates the fundamental differences between
our RCD approach and conventional controllable denoising
methods. In contrast to traditional methods that rely on con-
trol networks, the RCD pipeline generates editable noises of
varying intensities/levels, providing explicit control by ex-
ternal parameters and enabling network-free, real-time de-
noising editing. Real-time editing capabilities offered by
RCD create new opportunities for numerous applications
that were previously impossible using conventional tech-
niques, such as online video denoising editing, even during
playback (e.g., mobile phone camera video quality tuning
for ISP tuning engineers), as well as deploying controllable
denoising on edge devices and embedded systems. Since
the editing stage of RCD only involves image interpolation, users can edit their desired results on low-performance de-
vices without the need for GPUs/DSPs.
Moreover, unlike previous methods that only support
changing noise levels, RCD allows users to adjust denois-
ing results at a specific noise level by providing a new in-
terface to modify the noise generation strategy. RCD is
also the first validated method for controllable denoising on
real-world benchmarks. It is noteworthy that existing con-
trollable methods typically require training data with fixed-
level noise to establish their maximum and minimum noise
levels, which makes them unsuitable for most real-world
benchmarks comprising data with varying and unbalanced
noise levels.
Our main contributions can be summarized as follows:
• We propose RCD, a controllable denoising pipeline
that firstly supports real-time denoising control (>
2000×speedup compared to conventional controllable
methods) and larger control capacity (more than just
intensity) without multiple training stages [24] and
auxiliary networks [50].
• RCD is the first method supporting controllable de-
noising on real-world benchmarks.
• We propose a general Noise Decorrelation technique
to estimate editable noises.
• We achieve comparable or better results on widely-
used real/synthetic image-denoising and video-
denoising datasets with minimal additional computa-
tional cost.
|
Yan_A_Unified_HDR_Imaging_Method_With_Pixel_and_Patch_Level_CVPR_2023 | Abstract
Mapping Low Dynamic Range (LDR) images with differ-
ent exposures to High Dynamic Range (HDR) remains non-
trivial and challenging on dynamic scenes due to ghosting
caused by object motion or camera jitter. With the success
of Deep Neural Networks (DNNs), several DNNs-based
methods have been proposed to alleviate ghosting, they can-
not generate approving results when motion and saturation
occur. To generate visually pleasing HDR images in various
cases, we propose a hybrid HDR deghosting network, called
HyHDRNet, to learn the complicated relationship between
reference and non-reference images. The proposed HyH-
DRNet consists of a content alignment subnetwork and a
Transformer-based fusion subnetwork. Specifically, to ef-
fectively avoid ghosting from the source, the content align-
ment subnetwork uses patch aggregation and ghost atten-
tion to integrate similar content from other non-reference
images with patch level and suppress undesired compo-
nents with pixel level. To achieve mutual guidance between
patch-level and pixel-level, we leverage a gating module
to sufficiently swap useful information both in ghosted and
saturated regions. Furthermore, to obtain a high-quality
HDR image, the Transformer-based fusion subnetwork uses
a Residual Deformable Transformer Block (RDTB) to adap-
tively merge information for different exposed regions. We
examined the proposed method on four widely used public
HDR image deghosting datasets. Experiments demonstrate
that HyHDRNet outperforms state-of-the-art methods both
quantitatively and qualitatively, achieving appealing HDR
visualization with unified textures and colors.
| 1. Introduction
Natural scenes cover a very broad range of illumina-
tion, but standard digital camera sensors can only measure
*†The first three authors contributed equally to this work. This
work was partially supported by NSFC (U19B2037, 61901384), Natu-
ral Science Basic Research Program of Shaanxi (2021JCW-03, 2023-JC-
QN-0685), the Fundamental Research Funds for the Central Universi-
ties (D5000220444), and National Engineering Laboratory for Integrated
Aero-Space-Ground-Ocean Big Data Application Technology. Corre-
sponding author: Jinqiu Sun.
Figure 1. Our approach produces high-quality HDR images, lever-
aging both patch-wise aggregation and pixel-wise ghost atten-
tion. The two modules provide complementary visual information:
patch aggregation recovers patch-level content of the complex dis-
torted regions and ghost attention provides pixel-level alignment.
a limited dynamic range. Images captured by cameras of-
ten have saturated or under-exposed regions, which lead to
terrible visual effects due to severely missing details. High
Dynamic Range (HDR) imaging has been developed to ad-
dress these limitations, and it can display richer details. A
common way of HDR imaging is to fuse a series of differ-
ently exposed Low Dynamic Range (LDR) images. It can
recover a high-quality HDR image when both the scene and
the camera are static, however, it suffers from ghosting arti-
facts on dynamic objects or hand-held camera scenarios.
Several methods have been proposed to alleviate these
problems, including alignment-based methods [1, 7, 20],
rejection-based methods [3, 8, 16, 18, 33] and patch-based
methods [5, 13, 19]. The alignment-based methods employ
global ( e.g., homographies) or non-rigid alignment ( e.g.,
optical flow) to rectify the content of motion regions, but
they are error-prone and cause ghosts due to saturation and
occlusion. The rejection-based methods attempt to remove
the motion components from the exposed images and re-
place them with the content of the reference image. Al-
though these methods achieve good quality for static scenes,
they discard the misalignment regions, which causes insuf-
ficient content in moving regions. The patch-based meth-
ods utilize patch-level alignment to transfer and fuse simi-
lar content. While these patch-based methods achieve better
performance, they suffer from high computational costs.
With the rise of Deep Neural Networks (DNNs), many
works directly learn the complicated mapping between
LDR and HDR using a CNN. In general, these models fol-
low the paradigm of alignment before fusion. The non-end-
to-end DNN-based methods [6, 22] first align LDR images
with optical flow or homographies, and then fuse aligned
images to generate HDR images. The alignment approaches
are error-prone and inevitably cause ghosting artifacts when
complex foreground motions occur. Based on the attention-
based end-to-end method AHDRNet [25, 26] which per-
forms spatial attention to suppress motion and saturation,
several methods [2,12,27,30,31] have been proposed to re-
move ghosting artifacts. The spatial attention module pro-
duces attention maps and element-wise multiplies with non-
reference features, thus the model removes motion or satu-
rated regions and highlights more informative regions.
However, the success of these methods relies on no-
ticeable variations between reference and non-reference
frames. These methods perform well in the marginal areas,
even if there is a misalignment in the input images. Unluck-
ily, spatial attention produces unsatisfactory results when
motion and saturation are present simultaneously (see Fig-
ure 1). The reason can be attributed to that spatial attention
uses element-wise multiplication, which only considers the
features in the same positions. For example, in the reference
frame of Figure 1 ( i.e., LDR with EV=0), the information
in the over-exposed regions is unavailable, spatial attention
can only rely on the non-saturated information of the same
position ( i.e., moving regions) in the non-reference frame
due to element-wise multiplication. Therefore, recovering
the content of the moving and saturated regions is challeng-
ing. Finally, this limitation of spatial attention causes obvi-
ous ghosting artifacts in these complex cases.
To generate high-quality HDR images in various cases,
we propose a Hybrid HDR deghosting Network, named Hy-
HDRNet, to establish the complicated alignment and fusion
relationship between reference and non-reference images.
The proposed HyHDRNet comprises a content alignment
subnetwork and a Transformer-based fusion subnetwork.
For the content alignment subnetwork, inspired by patch-
based HDR imaging methods [5, 19], we propose a novel
Patch Aggregation (PA) module, which calculates the sim-
ilarity map between different patches and selectively ag-
gregates useful information from non-reference LDR im-
ages, to remove ghosts and generate content of saturation
and misalignment. While the traditional patch-based HDR
imaging methods have excellent performance but have thefollowing drawbacks: 1) low patch utilization ratio caused
by reusing the same patches, which leads to insufficient
content during fusion, 2) structural destruction of images
when transfering patches, 3) high computational complex-
ity in full resolution. To this end, our Patch Aggregation
mechanism 1) aggregates multiple patches which improves
the patch utilization ratio 2) aggregates patches instead of
exchanging them to maintain structural information, 3) cal-
culates a similarity map within a window to reduce compu-
tational complexity. These advantages promote the network
to remedy the content of saturated and motion regions(See
Figure 9), other patch-based HDR imaging methods cannot
achieve this goal. In a word, our PA module (patch level)
discovers and aggregates similar patches within a large re-
ceptive field according to the similarity map, thus it can re-
cover the content inside the distorted regions. To further
avoid ghosting, we also employ a ghost attention module
(pixel level) as a complementary branch for the PA module,
and propose a gating module to achieve mutual guidance of
these two modules in the content alignment subnetwork. In
addition, unlike previous methods using DNN structure in
the feature fusion stage which has static weights and only
merges the local information, we propose a Transformer-
based fusion subnetwork that uses Residual Deformable
Transformer Block (RDTB) to model long-range dependen-
cies of different regions. The RDTB can dynamically ad-
just weights and adaptively merge information in different
exposure regions. The experiments demonstrate that our
proposed method achieves state-of-the-art performance on
public datasets. The main contributions of our work can be
summarized as follows:
• We propose a hybrid HDR deghosting network to ef-
fectively integrate the advantage of patch aggregation
and ghost attention using a gating strategy.
• We first introduce the patch aggregation module which
selectively aggregates useful information from non-
reference LDR images to remove ghosts and generate
content for saturation.
• A novel residual deformable Transformer block is pro-
posed, which can adaptively fuse a large range of in-
formation to generate high-quality HDR images.
• We carry out both qualitative and quantitative experi-
ments, which show that our method achieves state-of-
the-art results over four public benchmarks.
|
Zhang_SadTalker_Learning_Realistic_3D_Motion_Coefficients_for_Stylized_Audio-Driven_Single_CVPR_2023 | Abstract
Generating talking head videos through a face image
and a piece of speech audio still contains many challenges.
i.e., unnatural head movement, distorted expression, and
identity modification. We argue that these issues are mainly
caused by learning from the coupled 2D motion fields. On
the other hand, explicitly using 3D information also suffers
problems of stiff expression and incoherent video. We present
SadTalker, which generates 3D motion coefficients (head
pose, expression) of the 3DMM from audio and implicitly
modulates a novel 3D-aware face render for talking head
generation. To learn the realistic motion coefficients, we
*Equal Contribution
†Corresponding Authorexplicitly model the connections between audio and differ-
ent types of motion coefficients individually. Precisely, we
present ExpNet to learn the accurate facial expression from
audio by distilling both coefficients and 3D-rendered faces.
As for the head pose, we design PoseVAE via a conditional
VAE to synthesize head motion in different styles. Finally,
the generated 3D motion coefficients are mapped to the un-
supervised 3D keypoints space of the proposed face render
to synthesize the final video. We conducted extensive experi-
ments to demonstrate the superiority of our method in terms
of motion and video quality.1
1The code and demo videos are available at https://sadtalker.
github.io .
| 1. Introduction
Animating a static portrait image with speech audio is
a challenging task and has many important applications
in the fields of digital human creation, video conferences,
etc. Previous works mainly focus on generating lip mo-
tion [2, 3, 28, 29, 49] since it has a strong connection with
speech. Recent works also aim to generate a realistic talking
face video containing other related motions, e.g., head pose.
Their methods mainly introduce 2D motion fields by land-
marks [50] and latent warping [37, 38]. However, the quality
of the generated videos is still unnatural and restricted by
the preference pose [16, 49], mouth blur [28], identity modi-
fication [37, 38], and distorted face [37, 38, 47].
Generating a natural-looking talking head video contains
many challenges since the connections between audio and
different motions are different. i.e., the lip movement has
the strongest connection with audio, but audio can be talked
via different head poses and eye blink. Thus, previous facial
landmark-based methods [2, 50] and 2D flow-based audio to
expression networks [37, 38] may generate the distorted face
since the head motion and expression are not fully disentan-
gled in their representation. Another popular type of method
is the latent-based face animation [3, 16, 28, 49]. Their meth-
ods mainly focus on the specific kind of motions in talking
face animation and struggle to synthesize high-quality video.
Our observation is that the 3D facial model contains a highly
decoupled representation and can be used to learn each type
of motion individually. Although a similar observation has
been discussed in [47], their methods also generate inaccu-
rate expressions and unnatural motion sequences.
From the above observation, we propose SadTalker, a
Stylized Audio- Driven Talk ing-head video generation sys-
tem through implicit 3D coefficient modulation. To achieve
this goal, we consider the motion coefficients of the 3DMM
as the intermediate representation and divide our task into
two major components. On the one hand, we aim to generate
the realistic motion coefficients ( e.g., head pose, lip motion,
and eye blink) from audio and learn each motion individu-
ally to reduce the uncertainty. For expression, we design a
novel audio-to-expression coefficient network by distilling
the coefficients from the lip-motion-only coefficients of
[28] and the perceptual losses (lip-reading loss [1], facial
landmark loss) on the reconstructed rendered 3D face [5].
For the stylized head pose, a conditional VAE [6] is used
to model the diversity and life-like head motion by learning
the residual of the given pose. After generating the realistic
3DMM coefficients, we drive the source image through a
novel 3D-aware face render. Inspired by face-vid2vid [40],
we learn a mapping between the explicit 3DMM coefficients
and the domain of the unsupervised 3D keypoint. Then, the
warping fields are generated through the unsupervised 3D
keypoints of the source and driving, which warp the reference im-
age to generate the final videos. We train each sub-network of expression generation, head pose generation, and face
renderer individually and our system can be inferred in an
end-to-end style. As for the experiments, several metrics
show the advantage of our method in terms of video and
motion quality.
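As a rough illustration of the inference pipeline described above (and not the released implementation), the hedged sketch below treats ExpNet, PoseVAE, the coefficient-to-keypoint mapping, and the face render as black-box modules; their names, signatures, and coefficient sizes are assumptions.

import torch

def talking_head_inference(image, audio_feat, exp_net, pose_vae, coeff_to_kp, renderer):
    # 1) Expression coefficients predicted per audio frame.
    exp_coeff = exp_net(audio_feat)                               # assumed shape (T, 64)
    # 2) Stylized head pose sampled from the conditional VAE as a residual motion.
    pose = pose_vae.sample(audio_feat, style=torch.randn(1, 6))   # assumed shape (T, 6)
    coeffs = torch.cat([exp_coeff, pose], dim=-1)                 # per-frame 3DMM motion coefficients
    # 3) Map explicit 3DMM coefficients to unsupervised 3D keypoints and warp the source image.
    frames = [renderer(image, coeff_to_kp(coeffs[t])) for t in range(coeffs.shape[0])]
    return torch.stack(frames)                                    # (T, 3, H, W) generated video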
The main contribution of this paper can be summarized
as:
•We present SadTalker, a novel system for a stylized
audio-driven single image talking face animation using
the generated realistic 3D motion coefficients.
•To learn the realistic 3D motion coefficient of the
3DMM model from audio, ExpNet and PoseVAE are
presented individually.
•A novel semantic-disentangled and 3D-aware face ren-
der is proposed to produce a realistic talking head video.
•Experiments show that our method achieves state-of-
the-art performance in terms of motion synchronization
and video quality.
|
Xu_Toward_RAW_Object_Detection_A_New_Benchmark_and_a_New_CVPR_2023 | Abstract
In many computer vision applications (e.g., robotics and
autonomous driving), high dynamic range (HDR) data is
necessary for object detection algorithms to handle a vari-
ety of lighting conditions, such as strong glare. In this pa-
per, we aim to achieve object detection on RAW sensor data,
which naturally saves the HDR information from image sen-
sors without extra equipment costs. We build a novel RAW
sensor dataset, named ROD, for Deep Neural Networks
(DNNs)-based object detection algorithms to be applied to
HDR data. The ROD dataset contains a large amount of an-
notated instances of day and night driving scenes in 24-bit
dynamic range. Based on the dataset, we first investigate
the impact of dynamic range for DNNs-based detectors and
demonstrate the importance of dynamic range adjustment
for detection on RAW sensor data. Then, we propose a sim-
ple and effective adjustment method for object detection on
HDR RAW sensor data, which is image adaptive and jointly
optimized with the downstream detector in an end-to-end
scheme. Extensive experiments demonstrate that the per-
formance of detection on RAW sensor data is significantly
superior to standard dynamic range (SDR) data in different
situations. Moreover, we analyze the influence of texture in-
formation and pixel distribution of input data on the perfor-
mance of the DNNs-based detector. Code and dataset will
be available at https://gitee.com//mindspore/
models/tree/master/research/cv/RAOD .
| 1. Introduction
Real-world scenes are complex and of high dynamic
range (HDR), especially in extreme situations like the direct
light of other vehicles. In many computer vision applica-
tions, such as autonomous driving and robotics, HDR data
is important and necessary for making safety-critical deci-
sions [34] since it extends the captured luminance. For in-
stance, images may easily get over-exposed in brighter areas
on standard dynamic range (SDR) images, but there may be
important information in corresponding raw regions. To ob-
tain the HDR data, recent works use additional cameras and
even unconventional sensors, such as neuromorphic cam-
eras and infrared cameras [13, 25], which inevitably brings
extra costs. In this paper, we make the first effort to achieve
object detection on RAW sensor data, which naturally stores
HDR information without any additional burden.
RAW sensor data is generated from the image sensor
and is the input data of the image signal processor (ISP),
which renders SDR images suitable for human perception and
understanding. RAW sensor data is naturally HDR and saves
all information from image sensors. However, datasets from
the RAW domain are difficult to collect, store, and annotate.
And, to the best of our knowledge, there is no large-scale
HDR RAW sensor dataset available for object detection.
Existing RAW sensor datasets are no more than 14-bit and
not large enough for practical applications. For instance, the
PASCALRAW dataset [27] is only 12-bit, which is not
wide enough to handle complex lighting conditions. To fill
this gap, we create a novel RAW sensor dataset for object
detection on driving scenes, named ROD, which consists
of 25k annotated RAW sensor images in a 24-bit dynamic
range covering day and night scenarios.
On the other hand, most Deep Neural Networks (DNNs)-
based object detection algorithms are designed for the com-
mon SDR data, which only records information in the 8-bit
dynamic range. Hence, we first investigate the impact of dy-
namic range on DNNs-based detection algorithms and ex-
perimentally find that directly applying these DNNs-based
detectors to HDR RAW sensor data results in significant
performance degradation, and it gets worse as the dynamic
range increases. Then, we analyze the key components of
the ISP system and demonstrate the importance of dynamic
range adjustment for RAW detection.
In this paper, we propose an adjustment method for
effective detection on RAW sensor data, which is jointly op-
timized with the downstream detection network in an end-
to-end scheme. Note that our proposed method is trained
together with the detector from scratch, only using object
annotations as the supervision. To effectively exploit the HDR
information from RAW sensor data, we devise an image-
adaptive processing network to regulate RAW sensor data
with learnable transformation functions. Specifically, we
design two modules to adjust the dynamic range of RAW
sensor data using image-level and pixel-level information. In
addition, our proposed method is lightweight and computa-
tionally efficient.
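A minimal sketch of what such an image-adaptive adjustment could look like is given below; the specific transformation family (a predicted global gamma plus a pixel-wise gain) and all layer sizes are assumptions for illustration, not the paper's exact two modules.

import torch
import torch.nn as nn

class RangeAdjust(nn.Module):
    def __init__(self):
        super().__init__()
        # Image-level branch: pools global statistics and predicts a gamma exponent.
        self.image_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(8), nn.Flatten(),
            nn.Linear(64 * 3, 32), nn.ReLU(), nn.Linear(32, 1))
        # Pixel-level branch: a lightweight conv predicting a per-pixel gain map.
        self.pixel_branch = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, raw):                 # raw in [0, 1]: demosaiced, 24-bit normalized (assumed)
        gamma = torch.sigmoid(self.image_branch(raw)) * 2.0 + 0.1     # gamma in (0.1, 2.1)
        x = raw.clamp(min=1e-6) ** gamma.view(-1, 1, 1, 1)            # global range compression
        gain = 0.5 + self.pixel_branch(x)                             # local, pixel-wise refinement
        return (x * gain).clamp(0, 1)

# adjusted = RangeAdjust()(raw_batch); detections = detector(adjusted)
# Detection losses back-propagate through RangeAdjust, so it is trained end to end with the detector.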
Extensive experimental results on the proposed ROD
dataset demonstrate that the performance of object detec-
tion on RAW sensor data is significantly superior to de-
tection on SDR data in different scenarios. Our method
also outperforms recent state-of-the-art neural ISP meth-
ods [23, 41]. Comprehensive ablation experiments show
that our proposed method effectively improves the perfor-
mance of DNNs-based object detection algorithms on HDR
RAW sensor data. Furthermore, we analyze the influence of
texture information and pixel distribution of the input data
on the performance of the downstream detection network.
In summary, the main contributions are as follows:
•We build a novel RAW sensor dataset for object detec-
tion on HDR RAW sensor data, which contains 25k driving
scenes on both day and night scenarios.
•We propose a simple and effective adjustment method
for detection on HDR RAW sensor data, which is jointly
optimized with the detector in an end-to-end manner.
•Extensive experiments demonstrate that object detec-
tion on HDR RAW sensor data significantly outperforms
that on SDR data in different situations. It also shows that
our method is effective and computationally efficient.
|
Yu_Mind_the_Label_Shift_of_Augmentation-Based_Graph_OOD_Generalization_CVPR_2023 | Abstract
Out-of-distribution (OOD) generalization is an impor-
tant issue for Graph Neural Networks (GNNs). Recent
works employ different graph editions to generate aug-
mented environments and learn an invariant GNN for gen-
eralization. However, the label shift usually occurs in aug-
mentation since graph structural edition inevitably alters
the graph label. This brings inconsistent predictive rela-
tionships among augmented environments, which is harm-
ful to generalization. To address this issue, we propose
LiSA , which generates label-invariant augmentations to fa-
cilitate graph OOD generalization. Instead of resorting to
graph editions, LiSA exploits Label-invariant Subgraphs of
the training graphs to construct Augmented environments.
Specifically, LiSA first designs the variational subgraph
generators to extract locally predictive patterns and con-
struct multiple label-invariant subgraphs efficiently. Then,
the subgraphs produced by different generators are col-
lected to build different augmented environments. To pro-
mote diversity among augmented environments, LiSA fur-
ther introduces a tractable energy-based regularization to
enlarge pair-wise distances between the distributions of en-
vironments. In this manner, LiSA generates diverse aug-
mented environments with a consistent predictive relation-
ship and facilitates learning an invariant GNN. Extensive
experiments on node-level and graph-level OOD bench-
marks show that LiSA achieves impressive generalization
performance with different GNN backbones. Code is avail-
able on https://github.com/Samyu0304/LiSA .
| 1. Introduction
Learning from graph-structured data is a fundamental
problem in various applications, such as 3D vision [50],
knowledge graph reasoning [65], and social network analy-
sis [32]. Recently, the Graph Neural Networks (GNNs) [33]
have become a de facto standard in developing deep learn-
ing systems on graphs [10], showing superior performance
on point cloud classification [18], recommendation system
[56], biochemistry [29] and so on. Despite their remarkable
success, these models heavily rely on the i.i.d. assumption
that the training and testing data are independently drawn
from an identical distribution [9, 39]. When tested on out-
of-distribution (OOD) graphs ( i.e. larger graphs), GNN usu-
ally suffers from unsatisfactory performances and unstable
prediction results. Hence, handling the distribution shift for
GNNs has received increasing attention.
Many solutions have been proposed to explore the
OOD generalization problem in Euclidean space [47],
such as invariant learning [3, 14, 38], group fairness [34],
and distribution-robust optimization [49]. Recent works
mainly resort to learning an invariant classifier that per-
forms equally well in different training environments [3,
16, 37, 38]. However, the study of its counterpart problem
for non-Euclidean graphs is comparatively lacking. One
challenge is the environmental scarcity of graph-structured
data [39, 53]. Inspired by the data augmentation literature
[48, 52], some pioneering works propose to generate aug-
mented training environments by applying different graph
edition policies to the training graphs [55,57]. After training
in these environments, the GNN is expected to have better
OOD generalization ability. Nevertheless, the graph labels
may change during the graph edition since they are sensi-
tive to graph structural modifications. This causes the label
shift problem of augmented graphs. For example, methods
of graph adversarial attack usually seek to modify the graph
structure to permute the model prediction [9]. Moreover,
a small structural modification can drastically influence the
biochemical property of molecule or protein graphs [30].
We formalize the impact of the label shift in augmen-
tations on generalization using a unified structure equa-
tion model [1]. Our analysis indicates that the label shift
causes inconsistent predictive relationships among the aug-
mented environments. This misguides the GNN to out-
put a perturbed prediction rather than the invariant pre-
diction, making the learned GNN hard to generalize (see
Section 3 for more details). Thus, it is crucial to gener-
ate label-invariant augmentations for graph OOD general-
ization. However, designing label-invariant graph edition
is nontrivial or even brings extensive computation, since it
requires learning class-conditional distribution for discrete
and irregular graphs. In this work, we propose a novel label-
invariant subgraph augmentation method, dubbed LiSA , for
the graph OOD generalization problem. For an input graph,
LiSA first designs the variational subgraph generators to
identify locally predictive patterns ( i.e. important nodes
or edges for the graph label) and generate multiple label-
invariant subgraphs. These subgraphs capture prediction-
relevant information with different structures, and thus con-
struct augmented environments with a consistent predictive
relationship. To promote diversity among the augmenta-
tions, we propose a tractable energy-based regularization
to enlarge the pair-wise distances between the distributions
of augmented environments. With the augmentations pro-
duced by LiSA, a GNN classifier is learned to be invariant
across these augmented environments. The GNN predictor
and variational subgraph generators are jointly optimized
with a bi-level optimization scheme [61]. LiSA is model-
agnostic and is flexible in handling both graph-level and
node-level distribution shifts. Extensive experiments indi-
cate that LiSA enjoys satisfactory performance gain over
the baselines on 7 graph classification datasets and 4 node
classification datasets. Our contributions are as follows:
• We propose a model-agnostic label-invariant subgraph
augmentation (LiSA) framework to generate aug-
mented environments with consistent predictive rela-
tionships for graph OOD generalization.
• We propose the variational subgraph generator to dis-
cover locally crucial patterns to construct the label-
invariant subgraphs efficiently.
• To further promote diversity, we further propose an
energy-based regularization to enlarge pair-wise dis-
tances between the distributions of different aug-
mented environments.
• Extensive experiments on node-level and graph-level
tasks indicate that LiSA enjoys satisfactory perfor-
mance gain over the baselines on various backbones.
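For illustration, a minimal sketch of a variational subgraph generator in this spirit is given below; the edge scorer, the relaxed Bernoulli sampling, the sparsity weight, and the PyG-style edge_weight interface of the GNN are all assumptions, not the exact LiSA design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SubgraphGenerator(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        self.edge_scorer = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1))

    def forward(self, node_emb, edge_index, tau=0.5):
        # Score each edge from the concatenation of its endpoint embeddings.
        src, dst = edge_index
        logits = self.edge_scorer(torch.cat([node_emb[src], node_emb[dst]], dim=-1)).squeeze(-1)
        # Relaxed Bernoulli (binary concrete) sample gives a differentiable edge mask.
        u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
        mask = torch.sigmoid((logits + torch.log(u) - torch.log(1 - u)) / tau)
        return mask                                    # soft edge weights defining the subgraph

def label_invariance_loss(gnn, x, edge_index, edge_mask, y):
    # The kept subgraph should preserve the prediction (label invariance) while staying sparse.
    logits = gnn(x, edge_index, edge_weight=edge_mask)
    return F.cross_entropy(logits, y) + 0.01 * edge_mask.mean()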
|
Zhang_Quantum-Inspired_Spectral-Spatial_Pyramid_Network_for_Hyperspectral_Image_Classification_CVPR_2023 | Abstract
Hyperspectral image (HSI) classification aims at assign-
ing a unique label for every pixel to identify categories
of different land covers. Existing deep learning models
for HSIs are usually performed in a traditional learning
paradigm. Being emerging machines, quantum computers
are limited in the noisy intermediate-scale quantum (NISQ)
era. The quantum theory offers a new paradigm for de-
signing deep learning models. Motivated by the quan-
tum circuit (QC) model, we propose a quantum-inspired
spectral-spatial network (QSSN) for HSI feature extraction.
The proposed QSSN consists of a phase-prediction module
(PPM) and a measurement-like fusion module (MFM) in-
spired from quantum theory to dynamically fuse spectral
and spatial information. Specifically, QSSN uses a quantum
representation to represent an HSI cuboid and extracts joint
spectral-spatial features using MFM. An HSI cuboid and its
phases predicted by PPM are used in the quantum represen-
tation. Using QSSN as the building block, we further pro-
pose an end-to-end quantum-inspired spectral-spatial pyra-
mid network (QSSPN) for HSI feature extraction and classi-
fication. In this pyramid framework, QSSPN progressively
learns feature representations by cascading QSSN blocks
and performs classification with a softmax classifier. It is
the first attempt to introduce quantum theory in HSI pro-
cessing model design. Substantial experiments are con-
ducted on three HSI datasets to verify the superiority of the
proposed QSSPN framework over the state-of-the-art meth-
ods.
| 1. Introduction
Hyperspectral images (HSIs) are always represented by
three-order tensors that collected by remote sensors to
record the characteristics reflected by land covers. Each
HSI contains two spatial dimensions and one spectral di-
[Figure 1 plots average accuracy (AA, %) against FLOPs (M) for QSSPN, HiT, RvT, SSAN, SSRN, and SSFTT.]
Figure 1. Performance comparison between the proposed QSSPN
framework and existing methods. Average accuracies on Indian
Pines dataset are presented.
mension to reflect abundant spectral information and spatial
context, making it different from color images. HSI classifi-
cation aims to determine land-cover categories of areas rep-
resented by every pixel according to rich spatial and spectral
information [21]. It has a broad range of applications, in-
cluding target detection [42], mining [1] and agriculture [8].
Based on the recorded HSIs, samples in different categories
may be extremely imbalanced in some data sets. This makes
HSI classification very challenging [20, 49].
In recent decades, a large number of HSI classifica-
tion models have been proposed. Existing models can be
divided into traditional learning methods and deep learn-
ing methods. Traditional classification models are simply
adopted for HSIs, such as support vector machine (SVM)
[27] and K-nearest neighbours (KNN) [19]. To achieve bet-
ter results, advanced traditional learning models, e.g., ex-
tended morphological profile (EMP) [3] and extended mul-
tiattribute profile (EMAP) [7], first conduct feature extrac-
tion and then perform classification. These methods are
two-step learning models that cannot achieve satisfactory
performance. Deep learning models are promising to si-
multaneously conduct feature extraction and classification
in an end-to-end manner [12, 34]. Benefiting from these
developed techniques, deep learning models for HSI classi-
fication are blossoming. A wide variety of models rely
on convolutional neural networks (CNNs). Most existing
models are designed to separately extract spatial and spec-
tral features in different branches [18, 47]. To enrich the
development of deep learning models, attention mechanic
[26, 46], graph convolution network [45] and transformer
learning [36, 43, 48] have been explored in recent works.
These works make significant improvement in architecture
design and classification performance. However, they sepa-
rately aggregate spatial and spectral information in different
modules, resulting in model redundancy and inefficient ex-
ploration of the correlation between spatial and spectral in-
formation. Besides, they are all developed within the
traditional learning paradigm.
Different from traditional computers, quantum comput-
ers are emerging machines to perform quantum algorithms
[2]. Quantum computing utilizes quantum theory to process
data in quantum devices [14, 28]. There are several models
of quantum computation, such as quantum circuits, quan-
tum annealing and adiabatic quantum computation. It has
been proved that quantum computers outperform classical
computers in solving certain problems [11, 33]. For ex-
ample, Shor’s algorithm can be used to solve the integer
factorization problem much faster than algorithms running
on the classical computer [33]. In the noisy intermediate-
scale quantum (NISQ) era [31], quantum computers cannot
perform complex quantum algorithms for practical appli-
cations. Fortunately, quantum computation provides a new
mathematical formalism for computing. As a new learn-
ing paradigm, quantum machine learning (QML) adopts
quantum computation to enhance classical machine learn-
ing models [4, 32]. Recently, quantum theory has been
adopted in classical algorithms and deep learning models,
with an expectation of improving computation efficiency
and outcome quality [9, 38].
Motivated by quantum theory, in this paper, we
first introduce a quantum-inspired spectral-spatial net-
work (QSSN) for HSI feature extraction. The proposed
QSSN includes a phase-prediction module (PPM) and a
measurement-like fusion module (MFM). Instead of fus-
ing spatial and spectral information in different modules,
we extract joint spatial-spectral features in the same op-
eration. A small data cuboid taken from an HSI is rep-
resented by a quantum-inspired state representation, and the
amplitudes in the state vector are the values of the normalized
input cuboid. The corresponding phases are predicted by PPM,
dynamically modulating the spatial and spectral correla-
tions. MFM simultaneously aggregates spatial and spectral
information to generate a feature cuboid. Additionally, we
design a quantum-inspired spectral-spatial pyramid network
(QSSPN) framework with multiple cascaded QSSN blocks
and a softmax classifier for HSI classification. The deep pyra-
mid structure of QSSPN gradually decreases the channels
of feature cuboids and extracts robust and expressive fea-
tures from the original data for classification. This is a simple
but efficient end-to-end framework taking advantage of
both quantum theory and deep neural networks.
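A toy sketch of such a quantum-inspired block is shown below: the normalized cuboid supplies amplitudes, a small phase-prediction module supplies phases, and a measurement-like step returns squared magnitudes after a learnable complex mixing. The layer choices are assumptions rather than the exact QSSN design.

import torch
import torch.nn as nn

class QSSNBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.ppm = nn.Conv2d(in_ch, in_ch, 3, padding=1)      # predicts a phase per element
        self.mix_re = nn.Conv2d(in_ch, out_ch, 1)             # learnable complex mixing (real part)
        self.mix_im = nn.Conv2d(in_ch, out_ch, 1)             # learnable complex mixing (imag part)

    def forward(self, cuboid):                                # (B, bands, H, W) HSI cuboid
        amp = cuboid / (cuboid.flatten(1).norm(dim=1).view(-1, 1, 1, 1) + 1e-8)
        phase = torch.pi * torch.tanh(self.ppm(cuboid))       # phases in (-pi, pi)
        real, imag = amp * torch.cos(phase), amp * torch.sin(phase)
        # Measurement-like fusion: squared magnitude of the mixed complex state.
        out_re = self.mix_re(real) - self.mix_im(imag)
        out_im = self.mix_re(imag) + self.mix_im(real)
        return out_re ** 2 + out_im ** 2                      # joint spectral-spatial features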
The contributions of this article are summarized as fol-
lows.
• As far as we know, QSSN and QSSPN are the first
quantum-inspired methods proposed for HSI feature ex-
traction and classification, showing that this quantum-
inspired framework is promising.
• Motivated by QC in quantum theory, we propose
a QSSN for HSI feature extraction. QSSN aggre-
gates spatial-spectral information simultaneously us-
ing quantum-like representations and operations.
• Based on QSSN, we develop a QSSPN for HSI classi-
fication. In QSSPN, QSSN blocks are stacked in a pyra-
mid manner to extract robust and discrimina-
tive features for classification. As shown in Fig. 1,
QSSPN achieves the best classification accuracy with
low model complexity.
The rest of this paper is organized as follows: Sec. 2
reviews the related work about deep learning-based HSI
classification and quantum-inspired deep learning methods.
Sec. 3 introduces the detailed framework of QSSN and de-
scribes the proposed QSSPN for HSI classification. Sec. 4
presents the experimental results and analysis. Finally,
Sec. 5 draws the conclusion.
|
Xu_Seeing_Electric_Network_Frequency_From_Events_CVPR_2023 | Abstract
Most of the artificial lights fluctuate in response to the
grid’s alternating current and exhibit subtle variations in
terms of both intensity and spectrum, providing the poten-
tial to estimate the Electric Network Frequency (ENF) from
conventional frame-based videos. Nevertheless, the perfor-
mance of Video-based ENF (V-ENF) estimation largely re-
lies on the imaging quality and thus may suffer from signif-
icant interference caused by non-ideal sampling, motion,
and extreme lighting conditions. In this paper, we show
that the ENF can be extracted without the above limita-
tions from a new modality provided by the so-called event
camera, a neuromorphic sensor that encodes the light in-
tensity variations and asynchronously emits events with ex-
tremely high temporal resolution and high dynamic range.
Specifically, we first formulate and validate the physical
mechanism for the ENF captured in events, and then pro-
pose a simple yet robust Event-based ENF (E-ENF) estima-
tion method through mode filtering and harmonic enhance-
ment. Furthermore, we build an Event-Video ENF Dataset
(EV-ENFD) that records both events and videos in diverse
scenes. Extensive experiments on EV-ENFD demonstrate
that our proposed E-ENF method can extract more accu-
rate ENF traces, outperforming the conventional V-ENF by
a large margin, especially in challenging environments with
object motions and extreme lighting conditions. The code
and dataset are available at https://github.com/
xlx-creater/E-ENF .
| 1. Introduction
Light flicker is a nuisance in video [33] but can be ex-
ploited to detect and estimate the Electric Network Fre-
[Figure content: (a) the event-camera processing pipeline; columns for static scenes, dynamic scenes, and extreme lighting; rows (b) video sequences, (c) event streams, (d) ENF estimates.]
Figure 1. Illustrations of (a) the processing pipeline for E-ENF
extraction and comparative experimental results under (b) different
recording environments, in terms of (c) recorded event streams and
(d) extracted V-ENF and E-ENF.
quency (ENF) [10, 11], which is the power line transmis-
sion frequency in a grid, unlocking the potential of mul-
timedia forensic verification [7, 9], electrical load monitor-
ing [23,24], recording device identification [1,28], etc. This
is enabled by the discovery that the ENF fluctuates slightly
and randomly around the nominal value ( 50or60Hz) and
consistently across the entire intra-grid nodes [13].
Existing Video-based ENF (V-ENF) extraction methods
attempt to restore the illumination fluctuations by averag-
ing the pixel intensities of every frame over the video se-
quences [11]. Based on this, V-ENF estimates can be im-
proved by using a higher sample rate in the rolling shut-
ter [26,28] or an advanced frequency estimator [3,6,14,19].
However, due to the inherent characteristics of conventional
cameras, existing V-ENF extraction methods still face a se-
ries of challenges, which are summarized as follows.
•Non-ideal sampling. For global shutter videos, the
sample rate equals to the camera’s frame rate, usu-
ally at 25or30fps, which is smaller than the Nyquist
rate to record light flicker, resulting in frequency alias-
ing for ENF estimation [10]. Although rolling shutter
can increase the sample rate through the line exposure
mechanism, it also introduces the inter-frame idle time
within which no signal can be sampled [28,31], violat-
ing the uniform sampling condition.
•Motion. Motion in the video leads to the abrupt
change of pixel intensity, which is inconsistent with the
flicker pattern and produces strong interference. Note
that motion can be caused not only by object move-
ment ( e.g., walking person) but also by camera move-
ment ( e.g., hand-held camera shake).
•Extreme lighting conditions. Imaging quality of
over- or under-exposed videos (even under good light-
ing conditions) can be substantially degraded, in which
the flicker information can be totally lost [23]. Mean-
while, under extreme low-light conditions ( e.g., night
scene with insufficient lighting), one has to boost the
ISO for appropriate exposure, inevitably bringing in
ISO noises, which can severely deteriorate the ENF.
Generally, the actual content in video recordings can
have diverse scenes and objects, which interfere with
the task of ENF extraction from the content. Note
that putting a video camera still against a white wall illu-
minated by a light source is an ideal condition for V-ENF
extraction [10, 26]. However, to the very opposite, real-
world recordings hardly satisfy this condition. To date, re-
searchers and practitioners are still working intensively to
tackle the above problems [14, 27, 29].
In this paper, we show that the above-mentioned prob-
lems can be effectively solved via the proposed use of the
so-called event camera, a new modality for recording. The
event camera is a neuromorphic sensor that encodes the
light intensity variations and asynchronously emits events
with extremely high temporal resolution and high dynamic
range [8, 21].
Different from V-ENF, the proposed Event-based ENF
(E-ENF) extraction approach collects the illumination
changes and converts them into an event stream, based on
which the ENF traces can be eventually estimated, as shown
in Fig. 1(a). Thanks to the high temporal resolution andhigh dynamic range of the event camera, E-ENF can pro-
vide a sufficient sample rate and extract reliable ENF traces
even under harsh conditions, e.g., motion interference and
extreme lighting, as shown in Fig. 1(b-d).
Since no prior work has focused on extracting ENF from
events, we hereby attempt to provide the first proof-of-
concept study theoretically and experimentally demonstrat-
ing why the ENF can exist in event streams and how it can
be reliably extracted therein, in comparison with the con-
ventional V-ENF approach.
The main contributions of this paper are three-fold:
• Based on the event-sensing mechanism, we formulate
the process of the ENF fluctuation, reflected by light
flicker, being recorded and converted into an event
stream, validating the ENF capture in events.
• We propose the first method that effectively extracts
the ENF traces from events, featuring a uniform-
interval temporal sampling algorithm, a majority-
voting spatial sampling algorithm, and a harmonic se-
lection algorithm.
• We construct and open-source the first Event-Video
hybrid ENF Dataset (EV-ENFD), containing both
events and videos recorded in real-world environments
with motion and extreme lighting conditions. The code
and dataset are available at https://github.
com/xlx-creater/E-ENF .
|
Yoo_Towards_End-to-End_Generative_Modeling_of_Long_Videos_With_Memory-Efficient_Bidirectional_CVPR_2023 | Abstract
Autoregressive transformers have shown remarkable
success in video generation. However, the transformers are
prohibited from directly learning the long-term dependency
in videos due to the quadratic complexity of self-attention,
and inherently suffering from slow inference time and er-
ror propagation due to the autoregressive process. In this
paper, we propose Memory-efficient Bidirectional Trans-
former (MeBT) for end-to-end learning of long-term depen-
dency in videos and fast inference. Based on recent ad-
vances in bidirectional transformers, our method learns to
decode the entire spatio-temporal volume of a video in par-
allel from partially observed patches. The proposed trans-
former achieves a linear time complexity in both encoding
and decoding, by projecting observable context tokens into
a fixed number of latent tokens and conditioning them to
decode the masked tokens through the cross-attention. Em-
powered by linear complexity and bidirectional modeling,
our method demonstrates significant improvement over the
autoregressive transformers for generating moderately long
videos in both quality and speed. Videos and code are avail-
able at https://sites.google.com/view/mebt-cvpr2023.
| 1. Introduction
Modeling the generative process of videos is an impor-
tant yet challenging problem. Compared to images, gener-
ating convincing videos requires not only producing high-
quality frames but also maintaining their semantic struc-
tures and dynamics coherently over long timescale [11, 16,
27, 36, 38, 54].
Recently, autoregressive transformers on discrete repre-
sentation of videos have shown promising generative mod-
eling performances [11,32,51,53]. Such methods generally
involve two stages, where the video frames are first turned
into discrete tokens through vector quantization, and then
their sequential dynamics are modeled by an autoregressive
transformer. Powered by the flexibility of discrete distri-butions and the expressiveness of transformer architecture,
these methods demonstrate impressive results in learning
and synthesizing high-fidelity videos.
However, autoregressive transformers for videos suffer
from critical scaling issues in both training and inference.
During training, due to the quadratic cost of self-attention,
the transformers are forced to learn the joint distribution of
frames entirely from short videos ( e.g., 16 frames [11, 53])
and cannot directly learn the statistical dependencies of
frames over long timescales. During inference, the mod-
els are challenged by two issues of autoregressive gener-
ative process – its serial process significantly slows down
the inference speed, and perhaps more importantly, autore-
gressive prediction is prone to error propagation where the
prediction error of the frames accumulates over time.
To (partially) address the issues, prior works proposed
improved transformers for generative modeling of videos,
which are categorized as the following: (a)Employing
sparse attention to improve scaling during training [12, 16,
51],(b)Hierarchical approaches that employ separate mod-
els in different frame rates to generate long videos with
a smaller computation budget [11, 16], and (c)Remov-
ing autoregression by formulating the generative process as
masked token prediction and training a bidirectional trans-
former [12, 13]. While each approach is effective in ad-
dressing specific limitations in autoregressive transformers,
none of them provides comprehensive solutions to afore-
mentioned problems – (a, b) still inherits the problems in
autoregressive inference and cannot leverage the long-term
dependency by design due to the local attention window,
and(c)is not appropriate to learn long-range dependency
due to the quadratic computation cost. We believe that an
alternative that jointly resolves all the issues would provide
a promising approach towards powerful and efficient video
generative modeling with transformers.
In this work, we propose an efficient transformer for
video synthesis that can fully leverage the long-range de-
pendency of video frames during training, while being able
to achieve fast generation and robustness to error propaga-
tion. We achieve the former with a linear complexity ar-
chitecture that still imposes dense dependencies across all
timesteps, and the latter by removing autoregressive seri-
alization through masked generation with a bidirectional
transformer. While conceptually simple, we show that ef-
ficient dense architecture and masked generation are highly
complementary, and when combined together, lead to sub-
stantial improvements in modeling longer videos compared
to previous works both in training and inference. The con-
tributions of this paper are as follows:
• We propose Memory-efficient Bidirectional Trans-
former (MeBT) for generative modeling of video. Un-
like prior methods, MeBT can directly learn long-
range dependency from training videos while enjoying
fast inference and robustness in error propagation.
• To train MeBT for moderately long videos, we propose
a simple yet effective curriculum learning that guides
the model to learn short- to long-term dependencies
gradually over training.
• We evaluate MeBT on three challenging real-world
video datasets. MeBT achieves a performance compet-
itive to state-of-the-arts for short videos of 16 frames,
and outperforms all for long videos of 128 frames
while being considerably more efficient in memory
and computation during training and inference.
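The latent-bottleneck decoding described in the abstract can be sketched as follows; the number of latent tokens, the codebook size, and the single-layer attention blocks are illustrative assumptions rather than the full MeBT architecture.

import torch
import torch.nn as nn

class LatentBottleneckDecoder(nn.Module):
    def __init__(self, dim=512, n_latent=256, n_heads=8, vocab=1024):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latent, dim) * 0.02)
        self.encode = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.decode = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.head = nn.Linear(dim, vocab)          # logits over the (assumed) visual codebook

    def forward(self, context_tok, masked_query):
        # context_tok: (B, N_ctx, dim) embeddings of observed (unmasked) patches.
        # masked_query: (B, N_mask, dim) position embeddings of tokens to predict.
        B = context_tok.shape[0]
        lat = self.latents.unsqueeze(0).expand(B, -1, -1)
        lat, _ = self.encode(lat, context_tok, context_tok)   # cost ~ O(N_ctx * n_latent)
        out, _ = self.decode(masked_query, lat, lat)          # cost ~ O(N_mask * n_latent)
        return self.head(out)                                 # token logits for masked positions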
|
Zhang_Accelerating_Dataset_Distillation_via_Model_Augmentation_CVPR_2023 | Abstract
Dataset Distillation (DD), a newly emerging field, aims
at generating much smaller but efficient synthetic training
datasets from large ones. Existing DD methods based on
gradient matching achieve leading performance; however,
they are extremely computationally intensive as they re-
quire continuously optimizing a dataset among thousands
of randomly initialized models. In this paper, we assume
that training the synthetic data with diverse models leads
to better generalization performance. Thus we propose
two model augmentation techniques, i.e., using early-stage
models and parameter perturbation to learn an informative
synthetic set with significantly reduced training cost. Exten-
sive experiments demonstrate that our method achieves up
to 20× speedup and performance on par with
state-of-the-art methods.
| 1. Introduction
Dataset Distillation (DD) [3, 48] or Dataset Condensa-
tion [55, 56], aims to reduce the training cost by generat-
ing a small but informative synthetic set of training exam-
ples; such that the performance of a model trained on the
small synthetic set is similar to that trained on the origi-
nal, large-scale dataset. Recently, DD has become an in-
creasingly more popular research topic, and has been ex-
plored in a variety of contexts, including federated learn-
ing [17, 42], continual learning [33, 40], neural architecture
search [43, 57], medical computing [25, 26] and graph neu-
ral networks [21, 30].
DD has been typically cast as a meta-learning prob-
lem [16] involving bilevel optimization. For instance,
Wang et al . [48] formulate the network parameters as a
function of the learnable synthetic set in the inner-loop
[Figure: test accuracy (%) versus GPU hours for dataset condensation (log10 scale), comparing DC, DSA, DM, TM, IDC, and Ours at 5x, 10x, and 20x speedups.]
Figure 1. Performances of condensed datasets for training
ConvNet-3 vs. GPU hours to learn the 10 images per class con-
densed CIFAR-10 datasets with a single RTX-2080 GPU. Ours-5,
Ours-10, and Ours-20 accelerate the training speed of the state-of-
the-art method IDC [22] by 5×, 10×, and 20×.
optimization; then optimize the synthetic set by minimiz-
ing classification loss on the real data in the outer-loop.
This recursive computation hinders its application to real-
world large-scale model training, which involves thousands
to millions of gradient descent steps. Several methods have
been proposed to improve the DD method by introducing
ridge regression loss [2, 36], trajectory matching loss [3],
etc. To avoid unrolling the recursive computation graph,
Zhao et al. [57] propose to learn synthetic set by matching
gradients generated by real and synthetic data when training
deep networks. Based on this surrogate goal, several meth-
ods have been proposed to improve the informativeness or
compatibility of synthetic datasets from other perspectives,
ranging from data augmentation [55], contrastive signal-
ing [24], resolution reduction [22], and bit encoding [41].
Although model training on a small synthetic set is fast,
the dataset distillation process is typically expensive. For
instance, the state-of-the-art method IDC [22] takes ap-
proximately 30 hours to condense 50,000 CIFAR-10 im-
ages into 500 synthetic images with a single RTX-2080
GPU, which is equivalent to the time it takes to train 60
ConvNet-3 models on the original dataset. Furthermore,
the distillation time cost will rapidly increase for large-scale
datasets e.g. ImageNet-1K, which prevents its application
in computation-limited environments like end-user devices.
Prior work [56] on reducing the distillation cost results in
significant regression from the state-of-the-art performance.
In this paper, we aim to speed up the dataset distillation
process, while preserving even improving the testing per-
formance over state-of-the-art methods.
Prior works are computationally expensive as they fo-
cus on generalization ability such that the learned synthetic
set is useful to train many different networks as opposed
to a targeted network. This requires optimizing the syn-
thetic set over thousands of differently initialized networks.
For example, IDC [22] learns the synthetic set over 2000
randomly initialized models, while the trajectory matching
method (TM) [3] optimizes the synthetic set for 10000 dis-
tillation steps with 200 pre-trained expert models. Dataset
distillation, which learns the synthetic data that is generaliz-
able to unseen models, can be considered as an orthogonal
approach to model training which learns model parameters
that are generalizable to unseen data. Similarly, training the
synthetic data with diverse models leads to better general-
ization performance. This intuitive idea leads to the follow-
ing research questions:
Question 1. How to design the candidate pool of models
to learn synthetic data, for instance, consisting of randomly
initialized, early-stage or well-trained models?
Prior works [3, 22, 48, 57] use models from all training
stages. The underlying assumption is that models from all
training stages have similar importance. Zhao et al . [56]
show that synthetic sets with similar generalization perfor-
mance can be learned with different model parameter dis-
tributions, given an objective function in the form of feature
distribution matching between real and synthetic data. In
this paper, we take a closer look at this problem and show
that learning synthetic data on early-stage models is more
efficient for gradient/parameter matching based dataset dis-
tillation methods.
Question 2. Can we learn a good synthetic set using only
a few models?
Our goal is to learn a synthetic set with a small number
of (pre-trained) models to minimize the computational cost.
However, using fewer models leads to poor generalization
ability of the synthetic set. Therefore, we propose to apply
parameter perturbation on selected early-stage models to
incorporate model diversity and improve the generalization
ability of the learned synthetic set.
In a nutshell, we propose two model augmentation
techniques to accelerate the training speed of dataset dis-
tillation, namely using early-stage models and parameter
perturbation to learn an informative synthetic set with sig-
nificantly less training cost. As illustrated in Fig. 1, our
method achieves up to 20× speedup and performance on par with state-of-the-art DD methods.
|
Zhang_Learning_Emotion_Representations_From_Verbal_and_Nonverbal_Communication_CVPR_2023 | Abstract
Emotion understanding is an essential but highly chal-
lenging component of artificial general intelligence. The
absence of extensive annotated datasets has significantly
impeded advancements in this field. We present Emotion-
CLIP , the first pre-training paradigm to extract visual emo-
tion representations from verbal and nonverbal communi-
cation using only uncurated data. Compared to numerical
labels or descriptions used in previous methods, commu-
nication naturally contains emotion information. Further-
more, acquiring emotion representations from communica-
tion is more congruent with the human learning process.
We guide EmotionCLIP to attend to nonverbal emotion cues
through subject-aware context encoding and verbal emo-
tion cues using sentiment-guided contrastive learning. Ex-
tensive experiments validates the effectiveness and transfer-
ability of EmotionCLIP . Using merely linear-probe evalua-
tion protocol, EmotionCLIP outperforms the state-of-the-
art supervised visual emotion recognition methods and ri-
vals many multimodal approaches across various bench-
marks. We anticipate that the advent of EmotionCLIP will
address the prevailing issue of data scarcity in emotion
understanding, thereby fostering progress in related do-
mains. The code and pre-trained models are available at
https://github.com/Xeaver/EmotionCLIP.
| 1. Introduction
If artificial intelligence (AI) can be equipped with emo-
tional intelligence (EQ), it will be a significant step toward
developing the next generation of artificial general intelli-
gence [46, 92]. The combination of emotion and intelli-
gence distinguishes humans from other animals. The ability
to understand, use, and express emotions will significantly
facilitate the interaction of AI with humans and the environ-
ment [20, 48–50], making it the foundation for a wide vari-
ety of HCI [3], robotics [11], and autonomous driving [31]
applications.
[Figure content: an example conversation transcript, its crowd-sourced categorical labels (Shocked, Regretful), and a scene description, contrasted as annotation forms.]
Figure 1. Emotions emerge naturally in human communica-
tion through verbal and nonverbal cues. The rich semantic de-
tails within the expression can hardly be represented by human-
annotated categorical labels and descriptions in current datasets.
Artificial emotional intelligence (AEI) research is still
in its nascency [30, 73]. The recent emergence of pre-
trained models in CV [10, 24, 62] and NLP [7, 16, 33, 68]
domains has ushered in a new era of research in related
subjects. By training on large-scale unlabeled data in a self-
supervised manner, the model learns nontrivial representa-
tions that generalize to downstream tasks [42,62]. Unfortu-
nately, such a technique remains absent from AEI research.
The conventional approaches in visual emotion understand-
ing have no choice but to train models from scratch, or
leverage models from less-relevant domains [27, 66], suf-
fering from data scarcity [29, 45]. The lack of pre-trained
models greatly limits the development of AEI research.
Research in neuroscience and psychology offers insights
for addressing this problem. Extending from the capabil-
ities that have been coded genetically, humans learn emo-
tional expressions through daily interaction and communi-
cation as early as when they are infants. It has been shown
that both vision [58] and language [41] play crucial roles in
this learning process. By absorbing and imitating expres-
sions from others, humans eventually master the necessary
feelings to comprehend emotional states by observing and
analyzing facial expressions, body movements, contextual
environments, etc.
Inspired by how humans comprehend emotions, we pro-
pose a new paradigm for emotion understanding that learns
directly from human communication. The core of our idea
is to explore the consistency between verbal and nonverbal
affective cues in daily communication. Fig. 1 shows how
communication reveals emotion. Our method that learns
from communication is not only aligned with the human
learning process but also has several advantages:
1)Our method bypasses the problems in emotion data
collection by leveraging uncurated data from daily com-
munication. Existing emotion understanding datasets are
mainly annotated using crowdsourcing [29, 45, 77]. For
image classification tasks, it is straightforward for annota-
tors to agree on an image’s label due to the fact that the
label is determined by certain low-level visual character-
istics. However, crowdsourcing participants usually have
lower consensus on producing emotion annotations due to
the subjectivity and subtlety of affective labels [90]. This
phenomenon makes it extremely difficult to collect accurate
emotion annotations on a large scale. Our approach does
not rely on human annotations, allowing us to benefit from
nearly unlimited web data.
2)Our use of verbal expressions preserves fine-grained
semantics to the greatest extent possible. Limited by the
data collection strategy, existing datasets usually only con-
tain annotations for a limited number of emotion categories,
which is far from covering the space of human emotional
expression [86]. Moreover, the categorical labels com-
monly used in existing datasets fail to precisely represent
the magnitude or intensity of a certain emotion.
3)Our approach provides a way to directly model ex-
pressed emotion. Ideally, AEI should identify the individ-
ual’s emotional state, i.e., the emotion the person desires
to express. Unfortunately, it is nearly impossible to col-
lect data on this type of “expressed emotion” on a large
scale. Instead, the current practice is to collect data on “per-
ceived emotion” to approximate the person’s actual emo-
tional state, which inevitably introduces noise and bias to
labels.
In general, learning directly from how humans express
themselves is a promising alternative that gives a far broader
source of supervision and a more comprehensive represen-
tation. This strategy is closely analogous to the human
learning process and provides an efficient solution for ex-
tracting emotion representations from uncurated data.
We summarize our main contributions as follows:
• We introduce EmotionCLIP, the first vision-language
pre-training paradigm using uncurated data to the vi-
sual emotion understanding domain.
• We propose two techniques to guide the model to cap-ture salient emotional expressions from human verbal
and nonverbal communication.
• Extensive experiments and analysis demonstrate the
superiority and transferability of our method on vari-
ous downstream datasets in emotion understanding.
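As a loose illustration of a sentiment-guided contrastive objective in this spirit (not the paper's exact loss), the sketch below reweights a standard video-text InfoNCE loss by a per-utterance sentiment salience score; the encoders and the weighting scheme are assumptions.

import torch
import torch.nn.functional as F

def sentiment_guided_clip_loss(video_emb, text_emb, sentiment, temp=0.07):
    # video_emb, text_emb: (B, D) L2-normalized embeddings of paired clips/utterances.
    # sentiment: (B,) scores in [0, 1] measuring how emotionally salient each utterance is.
    logits = video_emb @ text_emb.t() / temp                   # (B, B) similarity matrix
    targets = torch.arange(len(logits), device=logits.device)
    # Weight each pair by its sentiment salience so emotion-rich utterances dominate training.
    w = sentiment / (sentiment.sum() + 1e-8)
    loss_v = (F.cross_entropy(logits, targets, reduction='none') * w).sum()
    loss_t = (F.cross_entropy(logits.t(), targets, reduction='none') * w).sum()
    return 0.5 * (loss_v + loss_t)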
|
Yuan_You_Are_Catching_My_Attention_Are_Vision_Transformers_Bad_Learners_CVPR_2023 | Abstract
Vision Transformers (ViTs), which made a splash in the
field of computer vision (CV), have shaken the dominance
of convolutional neural networks (CNNs). However, in
the process of industrializing ViTs, backdoor attacks have
brought severe challenges to security. The success of ViTs
benefits from the self-attention mechanism. However, com-
pared with CNNs, we find that this mechanism of capturing
global information within patches makes ViTs more sensi-
tive to patch-wise triggers. Under such observations, we
delicately design a novel backdoor attack framework for
ViTs, dubbed BadViT, which utilizes a universal patch-wise
trigger to catch the model’s attention from patches bene-
ficial for classification to those with triggers, thereby ma-
nipulating the mechanism on which ViTs survive to confuse
itself. Furthermore, we propose invisible variants of BadViT
to increase the stealth of the attack by limiting the strength
of the trigger perturbation. Through a large number of ex-
periments, it is proved that BadViT is an efficient backdoor
attack method against ViTs, which is less dependent on the
number of poisons, with satisfactory convergence, and is
transferable for downstream tasks. Furthermore, the risks
inside of ViTs to backdoor attacks are also explored from
the perspective of existing advanced defense schemes.
| 1. Introduction
Transformers, which are all-powerful in the field of nat-
ural language processing (NLP) [6, 13, 57], have recently
set off a wave of frenetic research in computer vision (CV).
Thanks to the self-attention mechanism, vision transform-
ers (ViTs) have broken the perennial domination of convo-
lutional neural networks (CNNs) [17], and have been de-
veloped in hot areas like image classification [36, 53, 66],
object detection [3, 7] and semantic segmentation [46, 64].
The architecture optimization researches of ViTs are also
continuously improving the performance and efficiency
[26, 40, 45, 53], and providing vitality for advancing the de-
ployment of ViTs in industry.
Unfortunately, manifold threats in deep learning pose se-
vere challenges to ViTs. For instance, adversarial attacks
[10, 11, 25, 28, 31, 32, 49, 50, 76] confuse the deep model to
make wrong predictions by adding subtle perturbations to
the input. In addition, backdoor attacks are also extremely
threatening to deep models [22, 27, 48, 51, 68]. More and
more deep learning tasks are “outsourced” training or di-
rectly fine-tuning on pre-trained models [22, 75], allowing
attackers to implant backdoors into the model by establish-
ing a strong association between the trigger and the attack
behavior. In response, a growing number of researchers
have paid attention to the security of ViTs under adversarial
attacks [1,4,20,39,44] and backdoor attacks [16,37,47,73].
However, previous ViT backdoor works have not system-
atically compared ViTs with CNNs to elucidate the source
of ViTs' vulnerability to backdoor attacks, and have not consid-
ered balancing attack concealment against attack benefit, so
their triggers can be easily detected by the naked eye.
To fill this gap, we systematically discuss the robustness of
ViTs and CNNs under basic backdoor attacks with differ-
ent trigger settings and find that ViTs seem to be more vul-
nerable to patch-wise triggers rather than image-blending
triggers. Delving into the essence of ViTs, images are di-
vided into patches as tokens to calculate attentions, which
can capture more interaction information between patches
at the global level than CNNs [44]. Thus the patch-wise
perturbation has been shown to sufficiently affect the self-
attention mechanism of ViTs [20] and make ViTs weaker
learners than CNNs. Inspired by this, a natural and interest-
ing question is whether backdoor attacks with the patch-
wise trigger are effective against ViTs. Accordingly, we pro-
pose BadViT, a well-designed backdoor attack framework
against ViTs. Through the optimization process, the univer-
sal adversarial patch is generated as a trigger, which can be
better caught by the model through the self-attention mech-
anism of ViTs to tighten the connection between the trig-
ger and the target class. To achieve invisible attacks, we
limit the perturbation strength of the adversarial patch-wise
trigger and adopt the blending strategy [12] instead of past-
ing. Moreover, we adopt a ViTs backdoor defense Patch-
Drop [16], and two state-of-the-art defenses in CNNs, Neu-
ral Cleanse [59] and FinePruning [33], to explore the vul-
nerability of ViTs against our BadViT.
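As a rough illustration of the trigger insertion described above (blending a bounded-strength patch-wise trigger into an image instead of pasting it), a minimal PyTorch-style sketch follows; the function name, arguments, and the clamping rule are our own assumptions rather than the authors' released code.

import torch

def apply_patch_trigger(img, trigger, top, left, alpha=1.0, eps=None):
    # img: (C, H, W) tensor in [0, 1]; trigger: (C, h, w) adversarial patch.
    # alpha = 1.0 reproduces plain pasting; alpha < 1 blends the trigger in.
    poisoned = img.clone()
    _, h, w = trigger.shape
    region = poisoned[:, top:top + h, left:left + w]
    blended = (1.0 - alpha) * region + alpha * trigger
    if eps is not None:
        # bound the perturbation strength for the invisible variant
        blended = region + (blended - region).clamp(-eps, eps)
    poisoned[:, top:top + h, left:left + w] = blended.clamp(0.0, 1.0)
    return poisoned

In a poisoning pipeline, such a function would be applied to a small fraction of training images whose labels are switched to the attacker's target class.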
Our main contributions are as follows:
• We explore the robustness of ViTs compared with
CNNs against backdoor attacks with different triggers.
• We propose our BadViT as well as its invisible version
and verify the validity through abundant experiments.
• We show the effect of BadViTs under three advanced
defense methods, and further discuss the characteris-
tics of ViTs under backdoor attacks through patch pro-
cessing, reverse engineering, and pruning.
We believe this paper will offer readers a new under-
standing of the robustness of ViTs against backdoor attacks,
and provide constructive insights into the follow-up ViTs
system optimization and backdoor defense efforts.
|
Zhang_DeSTSeg_Segmentation_Guided_Denoising_Student-Teacher_for_Anomaly_Detection_CVPR_2023 | Abstract
Visual anomaly detection, an important problem in com-
puter vision, is usually formulated as a one-class classifi-
cation and segmentation task. The student-teacher (S-T)
framework has proved to be effective in solving this chal-
lenge. However, previous works based on S-T only empir-
ically applied constraints on normal data and fused multi-
level information. In this study, we propose an improved
model called DeSTSeg, which integrates a pre-trained
teacher network, a denoising student encoder-decoder, and
a segmentation network into one framework. First, to
strengthen the constraints on anomalous data, we intro-
duce a denoising procedure that allows the student net-
work to learn more robust representations. From synthet-
ically corrupted normal images, we train the student net-
work to match the teacher network feature of the same im-
ages without corruption. Second, to fuse the multi-level S-T
features adaptively, we train a segmentation network with
rich supervision from synthetic anomaly masks, achieving a
substantial performance improvement. Experiments on the
industrial inspection benchmark dataset demonstrate that
our method achieves state-of-the-art performance, 98.6%
on image-level AUC, 75.8% on pixel-level average preci-
sion, and 76.4% on instance-level average precision.
| 1. Introduction
Visual anomaly detection (AD) with localization is an
essential task in many computer vision applications such
as industrial inspection [24, 36], medical disease screening
[27, 32], and video surveillance [18, 20]. The objective of
these tasks is to identify both corrupted images and anoma-
lous pixels in corrupted images. As anomalous samples oc-
cur rarely, and the number of anomaly types is enormous,
it is unlikely to acquire enough anomalous samples with all
possible anomaly types for training. Therefore, AD tasks
were usually formulated as a one-class classification and segmentation, using only normal data for model training.
The student-teacher (S-T) framework, known as knowl-
edge distillation, has proven effective in AD [3, 9, 26, 31,
33]. In this framework, a teacher network is pre-trained
on a large-scale dataset, such as ImageNet [10], and a
student network is trained to mimic the feature represen-
tations of the teacher network on an AD dataset with nor-
mal samples only. The primary hypothesis is that the stu-
dent network will generate different feature representations
from the teacher network on anomalous samples that have
never been encountered in training. Consequently, anoma-
lous pixels and images can be recognized in the inference
phase. Notably, [26, 31] applied knowledge distillation
at various levels of the feature pyramid so that discrepan-
cies from multiple layers were aggregated and demonstrated
good performance. However, there is no guarantee that the
features of anomalous samples are always different between
S-T networks because there is no constraint from anoma-
lous samples during the training. Even with anomalies, the
student network may be over-generalized [22] and output
similar feature representations as those by the teacher net-
work. Furthermore, aggregating discrepancies from multi-
ple levels in an empirical way, such as sum or product, could be
suboptimal. For instance, in the MVTec AD dataset under
the same context of [31], we observe that for the category of
transistor, employing the representation from the last layer,
with 88.4% on pixel-level AUC, outperforms that from the
multi-level features, with 81.9% on pixel-level AUC.
To address the problem mentioned above, we propose
DeSTSeg , illustrated in Fig. 1, which consists of a denois-
ing student network, a teacher network, and a segmenta-
tion network. We introduce random synthetic anomalies
into the normal images and then use these corrupted im-
ages1for training. The denoising student network takes
a corrupted image as input, whereas the teacher network
takes the original clean image as input. During training,
1All samples shown in this paper are licensed under the CC BY-NC-SA
4.0.
Figure 1. Overview of DeSTSeg. Synthetic anomalous images are generated and used during training. In the first step (a), the student
network with synthetic input is trained to generate similar feature representations as the teacher network from the clean image. In the
second step (b), the element-wise product of the student and teacher networks’ normalized outputs are concatenated and utilized to train
the segmentation network. The segmentation output is the predicted anomaly score map.
the feature discrepancy between the two networks is min-
imized. In other words, the student network is trained to
perform denoising in the feature space. Given anomalous
images as input to both networks, the teacher network en-
codes anomalies naturally into features, while the trained
denoising student network filters anomalies out of feature
space. Therefore, the two networks are reinforced to gen-
erate distinct features from anomalous inputs. For the ar-
chitecture of the denoising student network, we decided to
use an encoder-decoder network for better feature denoising
instead of adopting an identical architecture as the teacher
network. In addition, instead of using empirical aggrega-
tion, we append a segmentation network to fuse the multi-
level feature discrepancies in a trainable manner, using the
generated binary anomaly mask as the supervision signal.
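To make the fusion step concrete, the sketch below (our own naming, loosely following the description in the Figure 1 caption) upsamples the element-wise product of the normalized teacher and student features at each level and concatenates them as input to the segmentation network; it is an illustrative sketch, not the released implementation.

import torch
import torch.nn.functional as F

def fuse_discrepancies(teacher_feats, student_feats, out_size):
    # teacher_feats / student_feats: lists of (B, C_i, H_i, W_i) feature maps
    fused = []
    for t, s in zip(teacher_feats, student_feats):
        t = F.normalize(t, dim=1)
        s = F.normalize(s, dim=1)
        d = t * s                      # element-wise product of normalized features
        d = F.interpolate(d, size=out_size, mode="bilinear", align_corners=False)
        fused.append(d)
    return torch.cat(fused, dim=1)     # concatenated input for the segmentation network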
We evaluate our method on a benchmark dataset for sur-
face anomaly detection and localization, MVTec AD [2].
Extensive experimental results show that our method out-
performs the state-of-the-art methods on image-level, pixel-
level, and instance-level anomaly detection tasks. We also
conduct ablation studies to validate the effectiveness of our
proposed components.
Our main contributions are summarized as follows. (1) We propose a denoising student encoder-decoder, which is
trained to explicitly generate different feature representa-
tions from the teacher with anomalous inputs. (2) We em-
ploy a segmentation network to adaptively fuse the multi-
level feature similarities to replace the empirical inference
approach. (3) We conduct extensive experiments on the
benchmark dataset to demonstrate the effectiveness of our
method for various tasks.
|
Zhang_Inversion-Based_Style_Transfer_With_Diffusion_Models_CVPR_2023 | Abstract
The artistic style within a painting is the means of ex-
pression, which includes not only the painting material,
colors, and brushstrokes, but also the high-level attributes,
including semantic elements and object shapes. Previous
arbitrary example-guided artistic image generation meth-
ods often fail to control shape changes or convey elements.
Pre-trained text-to-image synthesis diffusion probabilistic
models have achieved remarkable quality but often require
extensive textual descriptions to accurately portray the at-
tributes of a particular painting. The uniqueness of an art-
work lies in the fact that it cannot be adequately explained
with normal language. Our key idea is to learn the artis-
tic style directly from a single painting and then guide the
synthesis without providing complex textual descriptions.
Specifically, we perceive style as a learnable textual de-
scription of a painting. We propose an inversion-based style
transfer method (InST), which can efficiently and accurately
learn the key information of an image, thus capturing and
transferring the artistic style of a painting. We demonstrate
the quality and efficiency of our method on numerous paint-
ings of various artists and styles. Codes are available at
https://github.com/zyxElsa/InST .
*Corresponding author: [email protected]
| 1. Introduction
If a photo speaks 1000 words, then every painting tells
a story. A painting contains the engagement of an artist’s
own creation. The artistic style of a painting can be the per-
sonalized textures and brushstrokes, the portrayed beautiful
moment, or some particular semantic elements. All these
artistic factors are difficult to describe with words. There-
fore, when we wish to utilize a favorite painting to create
new digital artworks that can imitate the original idea of the
artist, the task turns into example-guided artistic image gen-
eration.
Generating artistic image(s) from given example(s) has
attracted great interest in recent years. A typical task is
style transfer [1, 6, 8, 17, 34, 55, 59], which can create a new
artistic image from an arbitrary input pair of natural images
and painting image, by combining the content of the nat-
ural image and the style of the painting image. However,
particular artistic factors such as object shape and semantic
elements are difficult to be transferred (see Figures 2(b) and
2(e)). Text guided stylization [14, 16, 29, 38] produces an
artistic image from a natural image and a text prompt, but
the text prompt for the target style can usually be a rough
description only of the material (e.g., “oil”, “watercolor”
or “sketch”), art movement (e.g., “Impressionism” or “Cu-
bism”), artist (e.g., “Vincent van Gogh” or “Claude Monet”)
or a famous artwork (e.g., “Starry Night” or “The Scream”).
Figure 2. Concept differences between text-to-image synthesis [50], classical style transfer [7, 59] and our InST.
Diffusion-based methods [9,22,23,26,36,54] generate high-
quality and diverse artistic images on the basis of a text
prompt, with or without image examples. In addition to the
input image, a detailed auxiliary textual input is required to
guide the generation process if we want to reproduce some
vivid contents and styles. However, the creative idea of a
specific painting may still be difficult to reproduce in the
result.
In this paper, we propose a novel example-guided artis-
tic image generation framework (i.e., inversion-based style
transfer, InST), which is related to style transfer and text-to-
image synthesis, to mitigate all the above problems. Given
only a single input painting image, our method can learn
and transfer its style to a natural image with a very simple
text prompt (see Figures 1 and 2(f)). The resulting image
exhibits very similar artistic attributes to those of the orig-
inal painting, including material, brushstrokes, colors, ob-
ject shapes and semantic elements, without losing diversity.
Furthermore, the content of the resulting image can also be
controlled by providing a text description (see Figure 2(c)).
To achieve this goal, we need to obtain the representa-
tion of the image style, which refers to the set of attributes
that appear in the high-level textual description of the im-
age. We define the textual descriptions as “new words” that
do not exist in the normal language and obtain the embed-
dings via an inversion method. We exploit the recent success
of diffusion models [42,50] and inversion [2,15]. We adapt
diffusion models in our work as the backbone to be inverted
and as a generator in image-to-image and text-to-image syn-
thesis. Specifically, we propose an efficient and accurate
textual inversion based on the attention mechanism, which
can quickly learn key features from an image, and a stochas-
tic inversion to maintain the semantic of the content image.
We use CLIP [39] image encoder and learn key information
of the image through multi-layer cross-attention. Taking an
artistic image as a reference, the attention-based inversion
module is fed with its image embedding and then gives its
textual embedding. The diffusion models conditioning on the textual embedding can produce new images with the
learned style of the reference.
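As a sketch of what such an attention-based inversion module might look like, the following module maps CLIP image tokens to a single pseudo-word embedding through cross-attention; the class name, dimensions, and number of layers are assumptions for illustration and may differ from the actual InST implementation.

import torch
import torch.nn as nn

class AttnInversion(nn.Module):
    def __init__(self, img_dim=1024, txt_dim=768, n_heads=8, n_layers=2):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, txt_dim))   # learnable query for the new-word token
        self.proj = nn.Linear(img_dim, txt_dim)
        self.layers = nn.ModuleList(
            [nn.MultiheadAttention(txt_dim, n_heads, batch_first=True) for _ in range(n_layers)])

    def forward(self, clip_img_tokens):        # (B, N, img_dim) from the CLIP image encoder
        kv = self.proj(clip_img_tokens)
        q = self.query.expand(kv.size(0), -1, -1)
        for attn in self.layers:
            q = q + attn(q, kv, kv)[0]         # multi-layer cross-attention
        return q                               # textual embedding conditioning the diffusion model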
To demonstrate the effectiveness of InST, we conduct
comprehensive experiments on numerous images of various
artists and styles. The experiments show that InST produces
outstanding results, generating artistic images that imitate
the style attributes to a high degree and with a content that
is consistent with that of the input natural images or text de-
scriptions. InST demonstrates much improved visual qual-
ity and artistic consistency compared with state-of-the-art
approaches. These outcomes indicate the generality, preci-
sion, and adaptability of the proposed method.
|
Yang_Pruning_Parameterization_With_Bi-Level_Optimization_for_Efficient_Semantic_Segmentation_on_CVPR_2023 | Abstract
With the ever-increasing popularity of edge devices, it is
necessary to implement real-time segmentation on the edge
for autonomous driving and many other applications. Vi-
sion Transformers (ViTs) have shown considerably stronger
results for many vision tasks. However, ViTs with the full-
attention mechanism usually consume a large number of
computational resources, leading to difficulties for real-
time inference on edge devices. In this paper, we aim to de-
rive ViTs with fewer computations and fast inference speed
to facilitate the dense prediction of semantic segmentation
on edge devices. To achieve this, we propose a pruning pa-
rameterization method to formulate the pruning problem of
semantic segmentation. Then we adopt a bi-level optimiza-
tion method to solve this problem with the help of implicit
gradients. Our experimental results demonstrate that we
can achieve 38.9 mIoU on ADE20K val with a speed of 56.5
FPS on Samsung S21, which is the highest mIoU under the
same computation constraint with real-time inference.
| 1. Introduction
Inspired by the extraordinary performance of Deep Neu-
ral Networks (DNNs), DNNs have been applied to various
tasks. In this work, we focus on semantic segmentation,
which aims to assign a class label to each pixel of an image
to perform a dense prediction. It plays an important role in
many real-world applications, such as autonomous driving.
However, as a dense prediction task, segmentation models
usually have complicated multi-scale feature fusion struc-
tures with large feature sizes, leading to tremendous mem-
ory and computation overhead with slow inference speed.
To reduce the memory and computation cost, certain
lightweight CNN architectures [21, 40, 66] are designed for
efficient segmentation. Besides CNNs, inspired by the re-
cent superior performance of vision transformers (ViTs)
[17], some works [10, 11, 72] adopt ViTs in segmentation
tasks to explore self-attention mechanism with the global
receptive field.
*These authors contributed equally to this work.
Figure 1. Comparison of accuracy versus FPS on ADE20K.
However, it is still difficult for ViTs to re-
duce the computation cost of the dense prediction for seg-
mentation with large feature sizes.
With the wide spread of edge devices such as mobile
phones, it is essential to perform real-time inference of seg-
mentation on edge devices in practice. To facilitate mo-
bile segmentation, the state-of-the-art work TopFormer [68]
adopts a token pyramid transformer to produce scale-aware
semantic features with tokens from various scales. It signif-
icantly outperforms CNN- and ViT-based networks across
different semantic segmentation datasets and achieves a
good trade-off between accuracy and latency. However, it
only partially optimizes the token pyramid module, which
costs most of the computations and latency.
In this work, we propose a pruning parameterization
method with bi-level optimization to further enhance the
performance of TopFormer. Our objective is to search for
a suitable layer width for each layer in the token pyramid
module, which is the main cost of computations and latency
(over 60%). To achieve this, we first formulate the problem
with pruning parameterization to build a pruning framework
with a soft mask as a representation of the pruning policy.
With this soft mask, we further adopt thresholding to con-
vert the soft mask into a binary mask so that the model is
trained with actual pruned weights to obtain pruning results
directly. This is significantly different from other meth-
ods [27, 47] to train with unpruned small non-zero weights
and use fine-tuning to mitigate the performance degradation
after applying pruning. Besides, to update the soft mask as
long as the pruning policy, we adopt to straight though es-
timator (STE) method to make the soft mask differentiable.
Thus, we can build the pruning parameterization framework
with minimal overhead.
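A minimal sketch of the thresholding-plus-STE mechanism is given below; the fixed 0.5 threshold and the per-channel mask granularity are our assumptions, not necessarily those used in the paper.

import torch

class BinaryMaskSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, soft_mask, threshold=0.5):
        return (soft_mask > threshold).float()   # hard 0/1 mask used in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None                 # straight-through: gradient passes to the soft mask

soft_mask = torch.rand(64, requires_grad=True)   # one entry per prunable channel
hard_mask = BinaryMaskSTE.apply(soft_mask)
# a layer's weights would then be masked, e.g. weight * hard_mask.view(-1, 1, 1, 1)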
Based on this framework, we need to search the best-
suited layer width for each layer in the token pyramid mod-
ule. It is non-trivial to perform the search. As the to-
ken pyramid module needs to extract multi-scale informa-
tion from multiple spatial resolutions, the large hierarchical
search space leads to difficulties of convergence. To resolve
this problem, we adopt a bi-level optimization method. In
the outer optimization, we try to obtain the pruning pol-
icy based on the pruning parameters (the soft mask). In
the inner optimization, the optimized model weights with
the best segmentation performance under this soft mask can
be obtained. Compared with a typical pruning method, our
work incorporates the implicit gradients with second-order
derivatives to further guide the update of the soft mask
and achieve better performance. Our experimental results
demonstrate that we can achieve 38.9 mIoU (mean class-
wise intersection-over-union) on the ADE20K dataset with
a speed of 56.5 FPS on Samsung S21, which is the highest
mIoU under the same computation constraint with real-time
inference speed. As demonstrated in Figure 1, our models
can achieve a better tradeoff between the mIoU and speed.
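To convey the alternating structure of the bi-level search, a toy, self-contained sketch on a single linear layer is shown below; it deliberately omits the implicit-gradient and second-order terms the paper relies on, so it should be read as a conceptual illustration only.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
w = torch.randn(16, 8, requires_grad=True)          # model weights (inner variable)
soft_mask = torch.rand(16, requires_grad=True)      # pruning parameters (outer variable)
opt_w = torch.optim.SGD([w], lr=0.1)
opt_m = torch.optim.SGD([soft_mask], lr=0.05)
x, y = torch.randn(64, 8), torch.randint(0, 16, (64,))

def ste_mask(m):                                     # STE binarization of the soft mask
    return (m > 0.5).float() + m - m.detach()

for step in range(100):
    # inner: fit the weights under the current binarized mask
    loss_in = F.cross_entropy(x @ (w * ste_mask(soft_mask).unsqueeze(1)).t(), y)
    opt_w.zero_grad(); loss_in.backward(); opt_w.step()
    # outer: update the mask for task loss plus a sparsity (FLOPs-like) penalty
    loss_out = F.cross_entropy(x @ (w.detach() * ste_mask(soft_mask).unsqueeze(1)).t(), y) \
               + 0.1 * soft_mask.mean()
    opt_m.zero_grad(); loss_out.backward(); opt_m.step()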
We summarize our contributions below:
• We propose a pruning parameterization method to build
a pruning framework with a soft mask. We further use
a threshold-based method to convert the soft mask into
the binary mask to perform actual pruning during model
training and inference. Besides, STE is adopted to update
the soft mask efficiently through gradient descent opti-
mizers.
• To solve the pruning problem formulated with the frame-
work of pruning parameterization, we propose a bi-level
optimization method to utilize implicit gradients for bet-
ter results. We show that the second-order derivatives in
the implicit gradients can be efficiently obtained through
first-order derivatives, saving computations and memory.
• Our experimental results demonstrate that we can achieve
the highest mIoU under the same computation constraint
on various datasets. Specifically, we can achieve 38.9mIoU on the ADE20K dataset with a real-time inference
speed of 56.5 FPS on the Samsung S21.
|
Zhang_CompletionFormer_Depth_Completion_With_Convolutions_and_Vision_Transformers_CVPR_2023 | Abstract
Given sparse depths and the corresponding RGB images,
depth completion aims at spatially propagating the sparse
measurements throughout the whole image to get a dense
depth prediction. Despite the tremendous progress of deep-
learning-based depth completion methods, the locality of
the convolutional layer or graph model makes it hard for
the network to model the long-range relationship between
pixels. While recent fully Transformer-based architecture
has reported encouraging results with the global recep-
tive field, the performance and efficiency gaps to the well-
developed CNN models still exist because of its deteriora-
tive local feature details. This paper proposes a Joint Con-
volutional Attention and Transformer block (JCAT), which
deeply couples the convolutional attention layer and Vision
Transformer into one block, as the basic unit to construct
our depth completion model in a pyramidal structure. This
hybrid architecture naturally benefits both the local connec-
tivity of convolutions and the global context of the Trans-
former in one single model. As a result, our Completion-
Former outperforms state-of-the-art CNNs-based methods
on the outdoor KITTI Depth Completion benchmark and
indoor NYUv2 dataset, achieving significantly higher effi-
ciency (nearly 1/3 FLOPs) compared to pure Transformer-
based methods. Code is available at https://github.
com/youmi-zym/CompletionFormer .
| 1. Introduction
Active depth sensing has achieved significant gains in
performance and demonstrated its utility in numerous ap-
plications, such as autonomous driving and augmented re-
ality. Although depth maps captured by existing commer-
cial depth sensors ( e.g., Microsoft Kinect [ 23], Intel Re-
alSense [ 11]) or depths points within the same scanning
line of LiDAR sensors are dense, the distance between
valid/correct depth points could still be far owing to thesensor noise, challenging conditions such as transparent,
shining, and dark surfaces, or the limited number of scan-
ning lines of LiDAR sensors. To address these issues, depth
completion [ 2,16,26,31], which targets at completing and
reconstructing the whole depth map from sparse depth mea-
surements and a corresponding RGB image ( i.e., RGBD),
has gained much attention in the latest years.
For depth completion, one key point is to get the depth
affinity among neighboring pixels so that reliable depth la-
bels can be propagated to the surroundings [ 2,3,8,16,26].
Based on the fact that the given sparse depth could be
highly sparse due to noise or even no measurement be-
ing returned from the depth sensor, it requires depth com-
pletion methods to be capable of 1) detecting depth out-
liers by measuring the spatial relationship between pixels
in both local and global perspectives; 2) fusing valid depth
values from close or even extremely far distance points.
All these properties ask the network for the potential to
capture both local and global correlations between pixels.
Current depth completion networks collect context infor-
mation with the widely used convolution neural networks
(CNNs) [ 2,3,8,16,26,29,37,51] or graph neural net-
work [ 42,49]. However, both the convolutional layer and
graph models can only aggregate within a local region, e.g.
square kernel in 3×3for convolution and kNN-based neigh-
borhood for graph models [ 42,49], making it still tough to
model global long-range relationship, in particular within
the shallowest layers of the architecture. Recently, Guide-
Former [ 31] resorts fully Transformer-based architecture to
enable global reasoning. Unfortunately, since Vision Trans-
formers project image patches into vectors through a single
step, this causes the loss of local details, resulting in ignor-
ing local feature details in dense prediction tasks [ 28,43].
For depth completion, the limitations affecting pure CNNs
or Transformer based networks also manifest, as shown in
Fig.1. Despite anydistance the reliable depth points could
be distributed at, exploring an elegant integration of these
two distinct paradigms, i.e. CNNs and Transformer, has not
Figure 1. Comparison of attention maps of pure CNNs, Vision Transformer, and the proposed CompletionFormer with joint CNNs
and Transformer structure. The pixel highlighted with a yellow cross in RGB image (a) is the one we want to observe how the network
predicts it. Pure CNNs architecture (b) activates discriminative local regions ( i.e., the region on the fire extinguisher), whereas pure
Transformer based models (c) activate globally yet fail on local details. In contrast, our full CompletionFormer (d) can retain both the local
details and global context.
been studied for depth completion yet.
In this work, we propose CompletionFormer, a pyrami-
dal architecture coupling CNN-based local features with
Transformer-based global representations for enhanced
depth completion. Generally, there are two gaps we are
facing: 1) the content gap between RGB and depth in-
put; 2) the semantic gap between convolution and Trans-
former. As for the multimodal input, we propose embed-
ding the RGB and depth information at the early network
stage. Thus our CompletionFormer can be implemented in
an efficient single-branch architecture as shown in Fig. 2
and multimodal information can be aggregated throughout
the whole network. Considering the integration of convolu-
tion and Transformer, previous work has explored from sev-
eral different perspectives [ 6,12,25,28,43] on image classi-
fication and object detection. Although state-of-the-art per-
formance has been achieved on those tasks, high computa-
tion cost [ 12] or inferior performance [ 6,12] are observed
when these networks are directly adapted to depth com-
pletion task. To promise the combination of self-attention
and convolution still being efficient, and also effective, we
embrace convolutional attention and Transformer into one
block and use it as the basic unit to construct our network
in a multi-scale style. Specifically, the Transformer layer is
inspired by Pyramid Vision Transformer [ 39], which adopts
spatial-reduction attention to make the Transformer layer
much more lightweight. As for the convolution-related part,
the common option is to use plain convolutions such as In-
verted Residual Block [ 32]. However, the huge semantic
gap between convolution and the Transformer and the lost
local details by Transformer require the convolutional lay-
ers to increase its own capacity to compensate for it. Fol-
lowing this rationale, we further introduce spatial and chan-
nel attention [ 40] to enhance convolutions. As a result,
without any extra module to bridge the content and semantic
gaps [ 12,28,31], every convolution and Transformer layer
in the proposed block can access the local and global fea-
tures. Hence, information exchange and fusion happen ef-
fectively at every block of our network.
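A simplified schematic of one such joint unit is sketched below; the exact wiring of CompletionFormer's JCAT block (parallel branches, spatial-reduction attention, concatenation) differs, so treat this as an assumption-laden illustration of combining convolutional attention with a Transformer layer rather than the paper's architecture.

import torch
import torch.nn as nn

class JointBlock(nn.Module):
    def __init__(self, dim, n_heads=4):
        super().__init__()
        self.conv = nn.Sequential(                    # convolutional branch (local)
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(inplace=True))
        self.channel_attn = nn.Sequential(            # SE/CBAM-style channel attention
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(dim, dim, 1), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)   # global branch

    def forward(self, x):                             # x: (B, C, H, W)
        local = self.conv(x) * self.channel_attn(x)
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))          # (B, HW, C)
        glob = self.attn(tokens, tokens, tokens)[0].transpose(1, 2).reshape(b, c, h, w)
        return x + local + glob                       # fuse local and global features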
To summarize, our main contributions are as follows:
• We propose integrating Vision Transformer with con-volutional attention layers into one block for depth
completion, enabling the network to possess both local
and global receptive fields for multi-modal information
interaction and fusion. In particular, spatial and chan-
nel attention are introduced to increase the capacity of
convolutional layers.
• Taking the proposed Joint Convolutional Attention and
Transformer (JCAT) block as the basic unit, we intro-
duce a single-branch network structure, i.e. Comple-
tionFormer. This elegant design leads to a compara-
ble computation cost to current CNN-based methods
while presenting significantly higher efficiency when
compared with pure Transformer based methods.
• Our CompletionFormer yields substantial improve-
ments to depth completion compared to state-of-the-art
methods, especially when the provided depth is very
sparse, as often occurs in practical applications.
|
Yu_How_To_Prevent_the_Continuous_Damage_of_Noises_To_Model_CVPR_2023 | Abstract
Deep learning with noisy labels is challenging and in-
evitable in many circumstances. Existing methods reduce
the impact of mislabeled samples by reducing loss weights
or screening, which highly rely on the model’s superior
discriminative power for identifying mislabeled samples.
However, in the training stage, the trainee model is im-
perfect and will wrongly predict some mislabeled samples,
which cause continuous damage to the model training. Con-
sequently, there is a large performance gap between exist-
ing anti-noise models trained with noisy samples and mod-
els trained with clean samples. In this paper, we put forward
a Gradient Switching Strategy (GSS) to prevent the contin-
uous damage of mislabeled samples to the classifier. Theo-
retical analysis shows that the damage comes from the mis-
leading gradient direction computed from the mislabeled
samples. The trainee model will deviate from the correct
optimization direction under the influence of the accumu-
lated misleading gradient of mislabeled samples. To ad-
dress this problem, the proposed GSS alleviates the damage
by switching the gradient direction of each sample based on
the gradient direction pool, which contains all-class gradi-
ent directions with different probabilities. During training,
each gradient direction pool is updated iteratively, which
assigns higher probabilities to potential principal direc-
tions for high-confidence samples. Conversely, uncertain
samples are forced to explore in different directions rather
than mislead model in a fixed direction. Extensive experi-
ments show that GSS can achieve comparable performance
with a model trained with clean data. Moreover, the pro-
posed GSS is pluggable for existing frameworks. This idea
of switching gradient directions provides a new perspective
for future noisy-label learning.
| 1. Introduction
Recently, Deep Neural Networks (DNNs) have achieved
breakthrough results across various computer vision
tasks [9, 14–16, 20, 34, 36, 49, 50].
*Li Sun is the corresponding author.
Figure 1. Performance comparison of existing methods and the
proposed GSS on CIFAR-10 with 40% noisy labels. The red
dashed line denotes the upper limit, which is the accuracy of mod-
els trained with completely clean labels.
The high performance
of DNNs requires a large amount of labeled data, but it is
hard to guarantee label quality in many circumstances. As a
matter of fact, many benchmark datasets inevitably contain
noisy labels according to investigation results in [35].
Various types of researches are proposed to address the
noisy-label problem. The mainstream types are robust loss
function [18, 27, 44, 54] and sample screening [30, 37, 40].
These methods deal with mislabeled samples in essen-
tially similar ways, that is, by decreasing the weights of
low-confidence samples, which highly rely on the trainee
model’s discriminative power of identifying mislabeled
samples. However, during training the trainee model is
imperfect and will miss many mislabeled samples, which
will continuously damage the model. That is why there is a
large performance gap between existing anti-noise models
trained with noisy samples and models trained with clean
samples. As shown in Fig. 1, existing anti-noise meth-
ods (denoted by solid lines) have 1.61%∼9.81% accu-
racy gaps compared with the reference upper limit (red dash
line), which denotes the performance of models trained with
clean samples. It raises an important question: how to
prevent the continuous damage of noises to model train-
ing? The theoretical analysis in Section 4 shows that the
noise damage comes from the misleading gradient direc-
tions caused by noises. Therefore, it is a viable solution to
handle the continuous damage of noises to model training
by eliminating the impact of misleading gradient directions.
In this paper, we put forward a Gradient Switching Strat-
egy (GSS) to prevent the continuous damage of mislabeled
samples to the model training. The core idea is assigning
a random gradient direction to cancel out the negative im-
pact of mislabeled samples, especially for uncertain sam-
ples which could continuously generate a misleading gra-
dient in a single direction. For high-confidence samples,
the model will be optimized using their potential principal
directions with a larger probability. As the model’s dis-
criminative power grows over training time, parts of uncer-
tain samples will become high-confidence samples, which
in turn optimizes the model with their potential principal di-
rections. Finally, the model will be well-trained with almost
all samples in the dataset step by step.
Specifically, we devise a gradient direction pool for each
sample, which contains all-class gradient directions with
different probabilities. The probabilities of different gra-
dient directions are determined based on the original noisy
label, predictions, and partial randomness. In the training
stage, for uncertain samples, the probabilities of different
gradient directions are dominated by randomness. The mul-
tiple random gradient directions prevent a fixed misdirec-
tion from continuously damaging the training.
The high-confidence samples consist of two groups: the
predictions are consistent with original labels (consistent
sample), and the predictions are not consistent with original
labels (non-consistent sample). For consistent samples, the
gradient direction of the original label (potential principal
direction) has a higher probability than those for the remain-
ing gradient directions. For non-consistent samples, two
highest probabilities correspond to the gradient directions
of the original label and model prediction. The model ex-
plores two gradient directions and determines the potential
principal direction during training. In summary, the poten-
tial principal directions of high-confidence samples guide
the optimization of the model.
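The following sketch conveys the idea of a per-sample direction pool: every class receives a weight, with the given label and the current prediction favored, and the class sampled from the pool decides which gradient direction supervises this step. The specific weights here are placeholders; the paper updates the pool iteratively and confidence-dependently.

import torch

def sample_switched_labels(noisy_labels, preds, num_classes,
                           w_label=6.0, w_pred=3.0, w_rand=1.0):
    # unnormalized per-class weights: a small random share for every class,
    # larger shares for the given label and the model's prediction
    weights = torch.full((noisy_labels.size(0), num_classes), w_rand)
    weights.scatter_add_(1, noisy_labels.unsqueeze(1),
                         torch.full((noisy_labels.size(0), 1), w_label))
    weights.scatter_add_(1, preds.unsqueeze(1),
                         torch.full((preds.size(0), 1), w_pred))
    # the sampled class is used as the supervision target for this step,
    # which switches the gradient direction for uncertain samples
    return torch.multinomial(weights, 1).squeeze(1)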
Experiment results demonstrate that the proposed GSS
can effectively prevent the damage of mislabeled samples
to the model training. The proposed GSS is pluggable
for existing frameworks for noisy-label learning, which can
achieve a 1.23%∼9.22% accuracy improvement over SOTA
for high noise rates. Additionally, the model with GSS
trained on noisy samples can achieve comparable perfor-
mance with models trained with clean samples.
Overall, our contributions are summarized as follows:
• This paper is the first to clarify the continuous damage
of the mislabeled samples to model training. Theoreti-
cal analysis shows the continuous damage comes from
the misleading gradient direction derived from misla-
beled samples, which provides a new perspective forfuture noisy-label learning research.
• We propose the Gradient Switching Strategy (GSS) to
prevent the continuous gradient damage of mislabeled
samples to the model training. A gradient direction
pool containing gradient directions of all classes with
dynamic probabilities for each sample is devised to al-
leviate the impact of uncertain samples and optimize
the model with the potential principal direction.
• Detailed theoretical analysis and extensive experimen-
tal results show that the proposed GSS can effectively
prevent damage of mislabeled samples. Through com-
bining GSS with existing anti-noise learning methods,
the final classification performance can achieve up to
1.23%∼9.22% accuracy improvement over SOTA
on datasets with severe noise, some of which are even
comparable to the model trained with clean samples.
|
Zhang_3D-Aware_Object_Goal_Navigation_via_Simultaneous_Exploration_and_Identification_CVPR_2023 | Abstract
Object goal navigation (ObjectNav) in unseen environ-
ments is a fundamental task for Embodied AI. Agents in ex-
isting works learn ObjectNav policies based on 2D maps,
scene graphs, or image sequences. Considering this task
happens in 3D space, a 3D-aware agent can advance its
ObjectNav capability via learning from fine-grained spa-
tial information. However, leveraging 3D scene representa-
tion can be prohibitively unpractical for policy learning in
this floor-level task, due to low sample efficiency and expen-
sive computational cost. In this work, we propose a frame-
work for the challenging 3D-aware ObjectNav based on two
straightforward sub-policies. The two sub-policies, namely
corner-guided exploration policy and category-aware iden-
tification policy, simultaneously perform by utilizing on-
line fused 3D points as observation. Through extensive ex-
periments, we show that this framework can dramatically
improve the performance in ObjectNav through learning
from 3D scene representation. Our framework achieves the
best performance among all modular-based methods on the
Matterport3D and Gibson datasets, while requiring (up to
30x) less computational cost for training. The code will be
released to benefit the community.1
| 1. Introduction
As a vital task for intelligent embodied agents, object
goal navigation (ObjectNav) [38, 49] requires an agent to
find an object of a particular category in an unseen and
unmapped scene. Existing works tackle this task through
end-to-end reinforcement learning (RL) [27, 36, 47, 51] or
modular-based methods [9, 14, 35]. End-to-end RL based
methods take as input the image sequences and directly
output low-level navigation actions, achieving competitive
*Joint first authors
†Corresponding author: [email protected]
1Homepage: https://pku-epic.github.io/3D-Aware-ObjectNav/
: sofa
: chair: cushion
Looking for a chair
A
C
B
A
C
B
C
Figure 1. We present a 3D-aware ObjectNav framework along
with simultaneous exploration and identification policies: A→B,
the agent was guided by an exploration policy to look for its target;
B→C, the agent consistently identified a target object and finally
called STOP.
performance while suffering from lower sample efficiency
and poor generalizability across datasets [3, 27]. Therefore,
we favor modular-based methods, which usually contain the
following modules: a semantic scene mapping module that
aggregates the RGBD observations and the outputs from
semantic segmentation networks to form a semantic scene
map; an RL-based goal policy module that takes as input
the semantic scene map and learns to online update a goal
location; finally, a local path planning module that drives
the agent to that goal. Under this design, the semantic ac-
curacy and geometric structure of the scene map are crucial
to the success of object goal navigation.
We observe that the existing modular-based methods
mainly construct 2D maps [8, 9], scene graphs [34, 56] or
neural fields [43] as their scene maps. Given that objects
lie in 3D space, these scene maps are inevitably deficient in
leveraging 3D spatial information of the environment com-
prehensively and thus have been a bottleneck for further
improving object goal navigation. In contrast, forming a
3D scene representation naturally offers more accurate, spa-
tially dense and consistent semantic predictions than its 2D
counterpart, as proved by [12, 31, 45]. Hence, if the agent
could take advantage of the 3D scene understanding and
form a 3D semantic scene map, it is expected to advance
the performance of ObjectNav.
However, leveraging 3D scene representation would
bring great challenges to ObjectNav policy learning. First,
building and querying fine-grained 3D representation across
a floor-level scene requires extensive computational cost,
which can significantly slow down the training of RL [7,55].
Also, 3D scene representation induces considerably more
complex and high-dimensional observations to the goal
policy than its 2D counterpart, leading to a lower sam-
ple efficiency and hampering the navigation policy learn-
ing [22, 57]. As a result, it is demanding to design a frame-
work to efficiently and effectively leverage powerful 3D in-
formation for ObjectNav.
To tackle these challenges, we propose a novel frame-
work composed of an online semantic point fusion module
for 3D semantic scene mapping and two parallel policy net-
works in charge of scene exploration and object identifica-
tion, along with a local path planning module. Our online
semantic point fusion module extends a highly efficient on-
line point construction algorithm [53] to enable online se-
mantic fusion and spatial semantic consistency computation
from captured RGBD sequences. This 3D scene construc-
tion empowers a comprehensive 3D scene understanding
for ObjectNav. Moreover, compared to dense voxel-based
methods [7, 55], our point-based fusion algorithm are more
memory-efficient [40,46] which makes it practically usable
for floor-level navigation task. (See Figure 1)
Moreover, to ease the learning of navigation policy, we
further propose to factorize the navigation policy into two
sub-policies, namely exploration and identification. The
two policies simultaneously perform to roll out an explo-
ration goal and an identified object goal (if exist), respec-
tively. Then the input for the local path planning module
will switch between these two goals, depending on whether
there exists an identified target object. More specifically, we
propose a corner-guided exploration policy which learns to
predict a long-term discrete goal at one of the four corners
of the bounding box of the scene. These corner goals ef-
ficiently drive the agent to perceive the surroundings and
explore regions where the target object is possibly settled.
And for identification, a category-aware identification pol-
icy is proposed to dynamically learn a discrete confidence
threshold to identify the semantic predictions for each cat-
egory. Both of these policies are trained by RL in low-
dimensional discrete action space. Through experiments,
the simultaneous two-policy mechanism and discrete action
space design dramatically reduce the difficulty in learning
for 3D-aware ObjectNav and achieve better performance
than existing modular-based navigation strategies [26, 35].
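A pseudocode-style sketch of the resulting goal switching is given below; the policy interfaces, array layout, and helper names are ours and are only meant to illustrate how the identification output overrides the exploration corner goal.

import numpy as np

def select_goal(scene_bbox, explore_logits, fused_points, target_cls, cls_threshold):
    # scene_bbox: (xmin, ymin, xmax, ymax); explore_logits: scores over the 4 corners;
    # fused_points: (N, 4) array of [x, y, class_id, confidence] from the online 3D fusion
    mask = (fused_points[:, 2] == target_cls) & (fused_points[:, 3] > cls_threshold[target_cls])
    if mask.any():
        return fused_points[mask, :2].mean(axis=0)    # identified target: go to its centroid
    corners = np.array([[scene_bbox[0], scene_bbox[1]], [scene_bbox[0], scene_bbox[3]],
                        [scene_bbox[2], scene_bbox[1]], [scene_bbox[2], scene_bbox[3]]])
    return corners[int(np.argmax(explore_logits))]    # otherwise explore toward the chosen corner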
Through extensive evaluation on the public benchmarks,
we demonstrate that our method performs online 3D-awareObjectNav at 15 FPS while achieving the state-of-the-
art performance on navigation efficiency. Moreover, our
method outperforms all other modular-based methods in
both efficiency and success rate with up to 30x times less
computational cost.
Our main contributions include:
• We present the first 3D-aware framework for Object-
Nav task.
• We build an online point-based construction and fusion
algorithm for efficient and comprehensive understand-
ing of floor-level 3D scene representation.
• We propose a simultaneous two-policy mechanism
which mitigates the problem of low sample efficiency
in 3D-aware ObjectNav policy learning.
|
Yu_V2X-Seq_A_Large-Scale_Sequential_Dataset_for_Vehicle-Infrastructure_Cooperative_Perception_and_CVPR_2023 | Abstract
Utilizing infrastructure and vehicle-side information
to track and forecast the behaviors of surrounding traf-
fic participants can significantly improve decision-making
and safety in autonomous driving. However, the lack
of real-world sequential datasets limits research in this
area. To address this issue, we introduce V2X-Seq, the
first large-scale sequential V2X dataset, which includes
data frames, trajectories, vector maps, and traffic lights
captured from natural scenery. V2X-Seq comprises two
parts: the sequential perception dataset, which includes
more than 15,000 frames captured from 95 scenarios, and
the trajectory forecasting dataset, which contains about
80,000 infrastructure-view scenarios, 80,000 vehicle-view
scenarios, and 50,000 cooperative-view scenarios cap-
tured from 28 intersections’ areas, covering 672 hours of
data. Based on V2X-Seq, we introduce three new tasks for
vehicle-infrastructure cooperative (VIC) autonomous driv-
ing: VIC3D Tracking, Online-VIC Forecasting, and Offline-
VIC Forecasting. We also provide benchmarks for the intro-
duced tasks. Find data, code, and more up-to-date informa-
tion at https://github.com/AIR-THU/DAIR-V2X-Seq.
| 1. Introduction
Although single-vehicle autonomous driving has made
significant advancements in recent years, it still faces sig-
nificant safety challenges due to its limited perceptual field
and inability to accurately forecast the behaviors of traffic
participants. These challenges hinder autonomous vehicles
from making well-informed decisions and driving safer. A
promising solution to address these challenges is to leverage
infrastructure information via Vehicle-to-Everything (V2X)
*Corresponding author. Work done while at AIR. For any questions or
discussions, please email [email protected].
1021010WiWhRXWTUajecWRU\WiWhLaUge-VcaOeTUajecWRU\V2X VieZ
SiQgOe-ageQW VieZDAIR-V2X-COPV2VV2X-SeT
AUgRYeUVe nXScenceVKITTIWa\mRV2X-Sim 103Sim.Real
ASRllRScaSeHighDNGSIMWIBAM10101RRSe3DFigure 1. Autonomous driving datasets. V2X-Seq is the first large-
scale, real-world, and sequential V2X dataset. The green circle de-
notes the real-world dataset, and the pink triangle denotes the sim-
ulated dataset. The abscissa represents the number of sequences.
communication, which has been shown to significantly ex-
pand perception range and enhance autonomous driving
safety [1, 38]. However, current research primarily focuses
on utilizing infrastructure data to improve the perception
ability of autonomous driving, particularly in the context
of frame-by-frame 3D detection. To enable well-informed
decision-making for autonomous vehicles, it is critical to
also incorporate infrastructure data to track and predict the
behavior of surrounding traffic participants.
To accelerate the research on cooperative sequential per-
ception and forecasting, we release a large-scale sequential
V2X dataset, V2X-Seq. All elements of this dataset were
captured and generated from real-world scenarios. Com-
pared with DAIR-V2X [38], which focuses on 3D object
detection tasks, V2X-Seq is specifically designed for track-
ing and trajectory forecasting tasks. The V2X-Seq dataset
is divided into two parts: the sequential perception dataset
Table 1. Comparison with the public autonomous driving dataset. ’-’ denotes that the information is not provided. ’Real/Sim.’ indi-
cates whether the data was collected from the real world or a simulator. V2X view includes multi-vehicle cooperative view and vehicle-
infrastructure cooperative view. V2X-Seq is the first large-scale sequential V2X dataset and focuses on vehicle-infrastructure cooperative
view. All data elements, including the traffic light signals, are captured and generated from the real world.
Dataset | Year | Real/Sim. | View | Trajectory | 3D Boxes | Maps | Traffic Light | Tracked Objects/Scene | Total Time (hour) | Scenes
KITTI [12] | 2012 | Real | Single-vehicle | ✓ | ✓ | ✗ | ✗ | 43.67 | 1.5 | 50
nuScenes [3] | 2019 | Real | Single-vehicle | ✓ | ✓ | ✓ | ✗ | 75.75 | 5.5 | 1,000
Waymo Motion [10, 28] | 2021 | Real | Single-vehicle | ✓ | ✓ | ✓ | ✓ | - | 574 | 103,354
Argoverse [5] | 2019 | Real | Single-vehicle | ✓ | ✗ | ✓ | ✗ | 50.03 | 320 | 324,557
ApolloScape [16, 25] | 2019 | Real | Single-vehicle | ✓ | ✗ | ✗ | ✗ | 50.6 | 2.5 | 103
HighD [17] | 2018 | Real | Drone | ✓ | ✗ | ✓ | ✗ | - | 16.5 | 5,940
WIBAM [14] | 2021 | Real | Infrastructure | ✗ | ✗ | ✗ | ✗ | 0 | 0.25 | 0
NGSIM [29] | 2016 | Sim. | Infrastructure | ✓ | ✗ | ✗ | ✗ | - | 1.5 | 540
V2X-Sim 2.0 [21] | 2022 | Sim. | V2X | ✓ | ✓ | ✗ | ✗ | - | 0.3 | 100
OPV2V [35] | 2021 | Sim. | V2X | ✓ | ✓ | ✗ | ✗ | 26.5 | 0.2 | 73
Cooper(inf) [1] | 2019 | Sim. | V2X | ✓ | ✓ | ✗ | ✗ | 30 | - | <100
DAIR-V2X-C [38] | 2021 | Real | V2X | ✗ | ✓ | ✓ | ✗ | 0 | 0.5 | 100
V2X-Seq/Perception | 2023 | Real | V2X | ✓ | ✓ | ✓ | ✗ | 110 | 0.43 | 95
V2X-Seq/Forecasting | 2023 | Real | V2X | ✓ | ✓ | ✓ | ✓ | 101 | 583 | 210,000
and the trajectory forecasting dataset. The sequential per-
ception dataset comprises 15,000 frames captured from 95
scenarios, which include infrastructure images, infrastruc-
ture point clouds, vehicle-side images, vehicle-side point
clouds, 3D detection/tracking annotations, and vector maps.
The trajectory forecasting dataset comprises 210,000 sce-
narios, including 50,000 cooperative-view scenarios, that
were mined from 672 hours of data collected from 28 in-
tersection areas. To our knowledge, V2X-Seq is the first
sequential V2X dataset that includes such large-scale sce-
narios, making it an ideal resource for developing and test-
ing cooperative perception and forecasting algorithms.
Based on the V2X-Seq dataset, we introduce three novel
tasks for vehicle-infrastructure cooperative perception and
forecasting. The first task is VIC3D Tracking, which aims
to cooperatively locate, identify, and track 3D objects us-
ing sequential sensor inputs from both the vehicle and in-
frastructure. The second task is Online-VIC trajectory fore-
casting, which focuses on accurately predicting future be-
havior of target agents by utilizing past infrastructure tra-
jectories, ego-vehicle trajectories, real-time traffic lights,
and vector maps. The third task is Offline-VIC trajectory
forecasting, which involves extracting relevant knowledge
from previously collected infrastructure data to facilitate
vehicle-side forecasting. These proposed tasks are accom-
panied by rich benchmarks. Additionally, we propose an
intermediate-level framework, FF-Tracking, to effectively
solve the VIC3D Tracking task.
The main contributions are organized as follows:
• We release the V2X-Seq dataset, which constitutes the
first large-scale sequential V2X dataset. All data are cap-
tured and generated from the real world.
• Based on the V2X-Seq dataset, we introduce three tasksfor the vehicle-infrastructure cooperative autonomous
driving community. To enable a fair evaluation of these
tasks, we have carefully designed a set of benchmarks.
• We propose a middle fusion method, named FF-Tracking,
for solving VIC3D Tracking and our proposed method
can efficiently overcome the latency challenge.
|
Zhang_Transferable_Adversarial_Attacks_on_Vision_Transformers_With_Token_Gradient_Regularization_CVPR_2023 | Abstract
Vision transformers (ViTs) have been successfully de-
ployed in a variety of computer vision tasks, but they are still
vulnerable to adversarial samples. Transfer-based attacks
use a local model to generate adversarial samples and di-
rectly transfer them to attack a target black-box model. The
high efficiency of transfer-based attacks makes it a severe
security threat to ViT-based applications. Therefore, it is vi-
tal to design effective transfer-based attacks to identify the
deficiencies of ViTs beforehand in security-sensitive scenar-
ios. Existing efforts generally focus on regularizing the in-
put gradients to stabilize the updated direction of adversar-
ial samples. However, the variance of the back-propagated
gradients in intermediate blocks of ViTs may still be large,
which may make the generated adversarial samples focus
on some model-specific features and get stuck in poor lo-
cal optima. To overcome the shortcomings of existing ap-
proaches, we propose the Token Gradient Regularization
(TGR) method. According to the structural characteristics
of ViTs, TGR reduces the variance of the back-propagated
gradient in each internal block of ViTs in a token-wise man-
ner and utilizes the regularized gradient to generate adver-
sarial samples. Extensive experiments on attacking both
ViTs and CNNs confirm the superiority of our approach.
Notably, compared to the state-of-the-art transfer-based at-
tacks, our TGR offers a performance improvement of 8.8%
on average.
| 1. Introduction
Transformers have been widely deployed in the natu-
ral language processing, achieving state-of-the-art perfor-
mance. Vision transformer (ViT) [5] first adapts the trans-
former structure to the computer vision, and manifests ex-
cellent performance. Afterward, diverse variants of ViTs
have been proposed to further improve its performance
*Corresponding author.
Figure 1. Illustration of our Token Gradient Regularization (TGR)
method. The red-colored entry represents the back-propagated
gradient with extreme values. The back-propagated gradients cor-
responding to one token in the internal blocks of ViTs are called
the token gradients. Since we regularize the back-propagated
gradients in a token-wise manner, we eliminate the token gradi-
ents (marked with crosses) where extreme gradients locate during
back-propagation to reduce the gradient variance. We then use the
regularized gradients to generate adversarial samples.
[2, 26] and broaden its application to different computer vi-
sion tasks [42, 43], which makes ViTs a well-recognized
successor for convolutional neural networks (CNNs). Un-
fortunately, recent studies have shown that ViTs are still
vulnerable to adversarial attacks [1, 22], which add human-
imperceptible noise to a clean image to mislead deep learn-
ing models. It is thus of great importance to understand
DNNs [13, 27, 28, 35] and devise effective attacking meth-
ods to identify their deficiencies before deploying them in
safety-critical applications [16, 17, 37].
Adversarial attacks can be generally partitioned into two
categories. The first category is the white-box attack, where
attackers can obtain the structures and weights of the tar-
get models for generating adversarial samples. The second
one is the black-box attack, where attackers cannot fetch the
information of the target model. Among different black-
box attacks, transfer-based methods employ white-box at-
tacks to attack a local source model and directly transfer
the generated adversarial sample to attack the target black-
box model. Due to their high efficiency and applicability,
transfer-based attacks pose a serious threat to the security of
ViT-based applications in practice. Therefore, in this work,
we focus on transfer-based attacks on ViTs.
There are generally two branches of transfer-based at-
tacks in the literature [15]. The first one is based on input
transformation, which aims to combine the input gradients
of multiple transformed images to generate transferable per-
turbations. Complementary to such methods, the second
branch is based on gradient regularization, which modifies
the back-propagated gradient to stabilize the update direc-
tion of adversarial samples and escape from poor local op-
tima. For example, Variance Tuning Method (VMI) [29]
tunes the input gradient to reduce the variance of input gra-
dients. However, the variance of the back-propagated gra-
dients in intermediate blocks of ViTs may still be large,
which may make the generated adversarial samples focus
on some model-specific features with extreme gradient val-
ues. As a result, the generated adversarial samples may still
get stuck in poor local optima and possess limited transfer-
ability across different models.
To address the weaknesses of existing gradient
regularization-based approaches, we propose the Token
Gradient Regularization (TGR) method for transferable ad-
versarial attacks on ViTs. According to the architecture of
ViTs, TGR reduces the variance of the back-propagated gra-
dient in each internal block of ViTs and utilizes the regular-
ized gradient to generate adversarial samples.
More specifically, ViTs crop one image into small
patches and treat these patches as a sequence of input tokens
to fit the architecture of the transformer. The output tokens
of internal blocks in ViTs correspond to the extracted inter-
mediate features. Therefore, we view token representations
as basic feature units in ViTs. We then examine the back-
propagated gradients of the classification loss with respect
to token representations in each internal block of ViTs,
which we call token gradient in this work. As illustrated
in Figure 1, we directly eliminate the back-propagated gra-
dients with extreme values in a token-wise manner until ob-
taining the regularized input gradients, which we used to
update the adversarial samples. Consequently, we can re-
duce the variance of the back-propagated gradients in in-
termediate blocks of ViTs and produce more transferable
adversarial perturbations.
We conducted extensive experiments on the ImageNet
dataset to validate the effectiveness of our proposed attack
method. We examined the transferability of our generated
adversarial samples to different ViTs and CNNs. Notably,
compared with the state-of-the-art benchmarks, our pro-
posed TGR shows a significant performance improvement
of 8.8% on average.
We summarize the contributions of this work as below:• We propose the Token Gradient Regularization (TGR)
method for transferable adversarial attacks on ViTs.
According to the architectures of ViTs, TGR regu-
larizes the back-propagated gradient in each internal
block of ViTs in a token-wise manner and utilizes the
regularized gradient to generate adversarial samples.
• We conducted extensive experiments to validate the ef-
fectiveness of our approach. Experimental results con-
firm that, on average, our approach can outperform
the state-of-the-art attacking method with a significant
margin of 8.8% on attacking ViT models and 6.2% on
attacking CNN models.
• We showed that our method can be combined with
other compatible attack algorithms to further enhance
the transferability of the generated adversarial sam-
ples. Our method can also be extended to use CNNs
as the local source models.
|
Zeng_3D-Aware_Facial_Landmark_Detection_via_Multi-View_Consistent_Training_on_Synthetic_CVPR_2023 | Abstract
Accurate facial landmark detection on wild images plays
an essential role in human-computer interaction, enter-
tainment, and medical applications. Existing approaches
have limitations in enforcing 3D consistency while detect-
ing 3D/2D facial landmarks due to the lack of multi-view
in-the-wild training data. Fortunately, with the recent ad-
vances in generative visual models and neural rendering,
we have witnessed rapid progress towards high quality 3D
image synthesis. In this work, we leverage such approaches
to construct a synthetic dataset and propose a novel multi-
view consistent learning strategy to improve 3D facial land-
mark detection accuracy on in-the-wild images. The pro-
posed 3D-aware module can be plugged into any learning-
based landmark detection algorithm to enhance its accu-
racy. We demonstrate the superiority of the proposed plug-
*This work was done when Libing Zeng and Wentao Bao were interns
at OPPO US Research Center, InnoPeak Technology, Inc.in module with extensive comparison against state-of-the-
art methods on several real and synthetic datasets.
| 1. Introduction
Accurate and precise facial landmark plays a signif-
icant role in computer vision and graphics applications,
such as face morphing [54], facial reenactment [58], 3D
face reconstruction [17, 18, 30], head pose estimation [38],
face recognition [1, 10, 13, 19, 32, 41, 71], and face genera-
tion [11, 21, 60, 69]. In these applications, facial landmark
detection provides great sparse representation to ease the
burden of network convergence in different training stages
and is often used as performance evaluation metric. For in-
stance, as a facial prior, it provides good initialization for
subsequent training [66, 67, 69, 76], good intermediate rep-
resentation to bridge the gap between different modalities
for content generation [11,27,51,79], loss terms which reg-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
12747
ularize the facial expression [11, 52], or evaluation metrics
to measure the facial motion quality [53, 73, 78].
The aforementioned applications require the estimated
facial landmarks to be accurate even with significantly var-
ied facial appearance under different identities, facial ex-
pressions, and extreme head poses. Tremendous efforts
have been devoted to address this problem [15, 22–24, 29,
34,40,56,63,74,75,77,82,84]. These approaches often rely
on manually annotated large-scale lab-controlled or in-the-
wild image datasets [4,34] to handle various factors such as
arbitrary facial expressions, head poses, illumination, facial
occlusions, etc.
However, even with the high cost of human labeling,
consistent andaccurate manual annotation of landmarks re-
mains challenging [22, 23, 34]. It is very difficult, if not im-
possible, to force a person to annotate the facial landmark
keypoints at the same pixel locations for faces of different
poses, let alone different annotators under different labeling
environments. Such annotation inconsistency and inaccu-
racy in training images are often the killing factor to learn
an accurate landmark localization model. This is particu-
larly a major problem in non-frontal faces where annotation
becomes extremely challenging. As shown in Fig. 1(a) a
small annotation variation in view #1, results in a signifi-
cant inaccuracy in view #2. This multi-view inconsistency
and inaccuracy can ultimately lead to poor landmark de-
tection accuracy, especially for facial images with extreme
head pose.
To mitigate this annotation inconsistency and inaccuracy
issue, we propose to learn facial landmark detection by en-
forcing multi-view consistency during training. Given the
images of the same facial identity captured with different
head poses, instead of detecting facial landmark at each sep-
arate facial image, we propose a multi-view consistency su-
pervision to locate facial landmark in a holistic 3D-aware
manner. To enforce multi-view consistency, we introduce
self-projection consistency loss and multi-view landmark
loss in training. We also propose an annotation genera-
tion procedure to exploit the merits of lab-controlled data
(e.g., multi-view images, consistent annotations) and in-the-
wild data ( e.g., wide range of facial expressions, identities).
Thanks to this synthetic data, our method does not rely on
human annotation to obtain the accurate facial landmark
locations. Therefore, it alleviates the problem of learning
from inaccurate and inconsistent annotations.
We formulate our solution as a plug-in 3D aware module,
which can be incorporated into any facial landmark detec-
tor and can boost a pre-trained model with higher accuracy
and multi-view consistency. We demonstrate the effective-
ness of our approach through extensive experiments on both
synthetic and real datasets. The main contributions of our
work are as follows:
• We show, for the first time, how to combine the meritsof lab captured face image data ( e.g., multi-view) and
the in-the-wild face image datasets ( e.g., appearance
diversity). Using our proposed approach we produce
a large-scale synthetic, but realistic, multi-view face
dataset, titled DAD-3DHeads-Syn.
• We propose a novel 3D-aware optimization module,
which can be plugged into any learning-based facial
landmark detection methods. By refining an existing
landmark detection algorithm using our optimization
module, we are able to improve its accuracy and multi-
view consistency.
• We demonstrate the performance improvements of our
module built on top multiple baseline methods on sim-
ulated dataset, lab-captured datasets, and in-the-wild
datasets.
|
Zhang_Boosting_Verified_Training_for_Robust_Image_Classifications_via_Abstraction_CVPR_2023 | Abstraction
Zhaodi Zhang1,2,4, Zhiyi Xue2, Yang Chen2, Si Liu3, Yueling Zhang2, Jing Liu1, Min Zhang2
1Shanghai Key Laboratory of Trustworthy Computing
2East China Normal University
3ETH Z ¨urich4Chengdu Education Research Institute
Abstract
This paper proposes a novel, abstraction-based, certified
training method for robust image classifiers. Via abstrac-
tion, all perturbed images are mapped into intervals before
feeding into neural networks for training. By training on
intervals, all the perturbed images that are mapped to the
same interval are classified as the same label, rendering the
variance of training sets to be small and the loss landscape
of the models to be smooth. Consequently, our approach
significantly improves the robustness of trained models. For
the abstraction, our training method also enables a sound
and complete black-box verification approach, which is or-
thogonal and scalable to arbitrary types of neural networks
regardless of their sizes and architectures. We evaluate our
method on a wide range of benchmarks in different scales.
The experimental results show that our method outperforms
state of the art by (i) reducing the verified errors of trained
models up to 95.64%; (ii) totally achieving up to 602.50x
speedup; and (iii) scaling up to larger models with up to
138 million trainable parameters. The demo is available at
https://github.com/zhangzhaodi233/ABSCERT.git.
| 1. Introduction
The robustness of image classifications based on neu-
ral networks is attracting more attention than ever due to
their applications to safety-critical domains such as self-
driving [51] and medical diagnosis [43]. There has been
a considerable amount of work on training robust neural
networks against adversarial perturbations [1, 6–8, 10, 12,
28,32,56–59,62]. Conventional defending approaches aug-
ment the training set with adversarial examples [5, 15, 27,
32,37,41,54,55]. They target only specific adversaries, de-
pending on how the adversarial samples are generated [7],
but cannot provide robustness guarantees [2, 11, 26].
Recent approaches attempt to train certifiably robust
models with guarantees [2, 11, 14, 24, 26, 29, 49, 63]. They
rely on the robustness verification results of neural networks
in the training process. Most of the existing verification ap-
proaches are based on symbolic interval propagation (SIP)[11,14,14,24,26], by which intervals are symbolically input
into neural networks and propagated on a layer basis. There
are, however, mainly three obstacles for these approaches to
be widely adopted: (i) adding the verification results to the
loss function for training brings limited improvement to the
robustness of neural networks due to overestimation intro-
duced in the verification phase; (ii) they are time-consuming
due to the high complexity of the verification problem per
se, e.g., NP-complete for the simplest ReLU-based fully
connected feedforward neural networks [19]; and (iii) the
verification is tied to specific types of neural networks in
terms of network architectures and activation functions.
To overcome the above obstacles, we propose a novel,
abstraction-based approach for training verified robust im-
age classifications whose inputs are numerical intervals .
Regarding (i), we abstract each pixel of an image into an
interval before inputting it into the neural network. The in-
terval is numerically input to the neural network by assign-
ing to two input neurons its lower and upper bounds, re-
spectively. This guarantees that all the perturbations to the
pixel in this interval do not alter the classification results,
thereby improving the robustness of the network. More-
over, this imposes no overestimation in the training phase.
To address the challenge (ii), we use forward propagation
and back propagation only to train the network without ex-
tra time overhead. Regarding (iii), we treat the neural net-
works as black boxes since we deal only with the input layer
and do not change other layers. Hence, being agnostic to the
actual neural network architectures, our approach can scale
up to fairly large neural networks. Additionally, we iden-
tify a crucial hyper-parameter, namely abstraction granu-
larity , which corresponds to the size of intervals used for
training the networks. We propose a gradient descent-based
algorithm to refine the abstraction granularity for training a
more robust neural network |
Zang_Discovering_the_Real_Association_Multimodal_Causal_Reasoning_in_Video_Question_CVPR_2023 | Abstract
Video Question Answering (VideoQA) is challenging as
it requires capturing accurate correlations between modal-
ities from redundant information. Recent methods focus
on the explicit challenges of the task, e.g. multimodal
feature extraction, video-text alignment and fusion. Their
frameworks reason the answer relying on statistical evi-
dence causes, which ignores potential bias in the multi-
modal data. In our work, we investigate relational structure
from a causal representation perspective on multimodal
data and propose a novel inference framework. For vi-
sual data, question-irrelevant objects may establish simple
matching associations with the answer. For textual data,
the model prefers the local phrase semantics which may de-
viate from the global semantics in long sentences. There-
fore, to enhance the generalization of the model, we dis-
cover the real association by explicitly capturing visual
features that are causally related to the question seman-
tics and weakening the impact of local language seman-
tics on question answering. The experimental results on
two large causal VideoQA datasets verify that our pro-
posed framework 1) improves the accuracy of the existing
VideoQA backbone, 2) demonstrates robustness on com-
plex scenes and questions. The code will be released at
https://github.com/Chuanqi-Zang/Discovering-
the-Real-Association .
| 1. Introduction
Video Question Answering (VideoQA) aims to un-
derstand visual information and describe it in language
question-answer format, which is a natural cognitive capa-
bility for humans. Computer Vision (CV) and Natural Lan-
guage Processing (NLP) as the base models for VideoQA
have shown significant progress due to the successful appli-
cation of deep learning, such as action classification [25],
object detection [10], instance segmentation [31], and large-
scale pre-trained language model [6,19]. These tremendous
*Wei Liang is the corresponding author.
Question: What will happen if the girl sprains?
Answer: The girl will stop.
Reason:
A.There are a lot people here , and can find someone to help at any time.
B.The girl can’t exercise because of a sprain and needs to rest .Question: What will happen if the power is cut off?
Answer:
A.[person_1] will stop singing karaoke.
B.[person_1] and [person_2] both cannot work.
TV , indoor singing karaoke
Someone, help Reason
Figure 1. Two samples in VideoQA dataset. They exhibit spurious
reasoning processes of B2A [26] that rely on statistical patterns,
including visually spurious object-relationship associations (top)
and textually unilateral semantic representations (bottom).
advances in basic applications fuel the confidence in fine-
grained multimodal analysis and reasoning, not only fea-
ture extraction, but also fine-grained general causality esti-
mation, which is critical for a robust cognitive system.
Recent VideoQA methods usually explore multimodal
relational knowledge by sophisticated structured architec-
ture, such as memory-augmented model [9], hierarchical
model [20], topological model [18], and transformer-based
model [37]. Although experiments validate their feature
fusion capabilities, we find that these methods concentrate
on statistical association based on multimodal data, ignor-
ing the real stable association. They usually use a gener-
ally constrained approach with Empirical Risk Minimiza-
tion (ERM), which tends to over-rely on the co-occurrence
bias of repeated objects and words in the collected observa-
tional data, and bypasses the impact of complete semantics
at the sentence level. This mechanism reduces the robust-
ness of the model on new data, even in the test set which
has similar distributions to the training set.
For example, as shown in Fig. 1 (top), two people are
playing ”cricket” indoors, and there are other objects in the
room, including a TV that is on. If relying on the statisti-
cal relationship, the model may be confused by the two im-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
19027
portant visual factors of ”indoor” and ”television”, and mis-
judge the concerned event, that is, ”sing karaoke”. Based on
the clues provided by the question, the concerned event is
related to the action that is taking place, e.g. ”two men play-
ing cricket”. This requires the model to accurately judge
the objects involved in the event and infer the answer. In
Fig. 1 (bottom), we show the predicted deviation caused by
statistical relationships in the text. When inferring the rea-
sons for answer selection, existing models intensively rely
on correlations between local words and video content, e.g.
”someone; help”, ignoring unreasonable inferences from
other parts of the sentence, e.g. ”a lot”.
From these observations, we summarize two causal chal-
lenges for the VideoQA task. 1) Irrelevant objects con-
founder. The visual information related to the question is
usually causally related to finite objects in the video. When
other objects or background are considered, they are sub-
ject to data bias and become confounders, misleading the
model to select the negative candidate. 2) Keywords con-
founder. The semantics expressed by textual information is
represented by the overall sentence. Some long sentences
contain partially sensible keywords. When just focusing
on local keywords semantics, the model falls into spurious
causal inferences, reducing the robustness of the model.
To address the above causal challenges and improve the
robustness of the model, we propose a Multimodal Causal
Reasoning (MCR) framework for video-text data. In this
framework, causal features and confounding features are
decoupled separately in visual and textual modes through
two training strategies. To explicitly decouple the influence
of different objects and scenes, MCR extracts fine-gained
object-level appearance features and motion dynamics by
spatial Region of Interest (ROI) Align [13] on global vi-
sual features. Among them, causally related objects are se-
lected based on the correlation between object features and
question semantics. In addition to the visual feature, we
also model object coordinate information, category infor-
mation, and global object interaction information to pro-
vide spatio-temporal relation representations for accurate
causal attribute classification. For textual confounders, we
adopt a strategy to reduce the impact of keywords on causal-
ity. MCR relies on the correlation between word encod-
ing and question-visual co-embedding to select keywords
which have a crucial impact on the prediction results. These
keywords provide negative representations for successive
deductive answers. Therefore, we combine these keywords
with other candidate answers to generate difficult negative
samples to improve the recognition ability of the model.
During training, visual intervention and textual interven-
tion are iteratively optimized. Multimodal causal relation-
ships are gradually established which improves the robust-
ness and reusability of the model.
We summarize our contributions as: (1) We discover twonew types of causal challenges for both visual data and tex-
tual data. (2) We propose an object-level causal relationship
extraction strategy to establish the real association between
objects and language semantics, and a keyword broadcast-
ing strategy to cut off the spurious influence of local textual
information. (3) We achieve state-of-the-art performance
on two latest large causal VideoQA datasets.
|
Yang_Video_Event_Restoration_Based_on_Keyframes_for_Video_Anomaly_Detection_CVPR_2023 | Abstract
Video anomaly detection (VAD) is a significant computer
vision problem. Existing deep neural network (DNN) based
VAD methods mostly follow the route of frame reconstruc-tion or frame prediction. However , the lack of mining andlearning of higher-level visual features and temporal con-text relationships in videos limits the further performance
of these two approaches. Inspired by video codec theory,we introduce a brand-new VAD paradigm to break throughthese limitations: First, we propose a new task of video
event restoration based on keyframes. Encouraging DNN toinfer missing multiple frames based on video keyframes so
as to restore a video event, which can more effectively mo-
tivate DNN to mine and learn potential higher-level visual
features and comprehensive temporal context relationshipsin the video. To this end, we propose a novel U-shaped Swin
Transformer Network with Dual Skip Connections ( USTN-
DSC ) for video event restoration, where a cross-attention
and a temporal upsampling residual skip connection are in-troduced to further assist in restoring complex static and
dynamic motion object features in the video. In addition,
we propose a simple and effective adjacent frame differ-ence loss to constrain the motion consistency of the video
sequence. Extensive experiments on benchmarks demon-
strate that USTN-DSC outperforms most existing methods,validating the effectiveness of our method.
| 1. Introduction
Video anomaly detection (V AD) is a hot but challenging
research topic in the field of computer vision. One of the
most challenging aspects comes from the scarcity of anoma-lous samples, which hinders us from learning anomalousbehavior patterns from the samples. As a result, it is hardfor supervised methods to show their abilities, as unsuper-
vised methods are by far the most widely explored direc-tion [ 1,2,4,8,10,19,25–27,31,33,37,41–43,47–51]. To
†Corresponding authors.IBBB B B B PI IBBB B B B PIN
o
n
e
II IN
o
n
eN
o
n
eN
o
n
eN
o
n
eN
o
n
eN
o
n
e
II IN
o
n
eN
o
n
eN
o
n
eN
o
n
eN
o
n
e
(a) video codec (b) video event restoration
Figure 1. Video codec vs. video event restoration. Compared
to video decoding based on explicit motion information, our pro-
posed video event restoration task encourages DNN to automat-
ically mine and learn the implied spatio-temporal relationships
from several frames with key appearance and temporal transition
information to restore the entire video event.
determine whether an abnormal event occurs under unsu-
pervised, a general approach is to model a regular patternbased on normal samples, leaving samples that are incon-sistent with this pattern as anomalies.
Under the unsupervised framework, reconstruction and
prediction-based approaches are two of the most represen-
tative paradigms [ 10,19] for V AD in the current era of
deep neural networks (DNN). The reconstruction-based ap-
proaches typically use deep autoencoder (DAE) to learn to
reconstruct an input frame and regard a frame with large
reconstruction errors as anomalous. However, the power-ful generalization capabilities of DAE enable well recon-
struction of anomalous as well [ 8]. This is attributed to
the simple task of reconstructing input frames, where DAEonly memorizes low-level details rather than understand-ing high-level semantics [ 17]. The idea of the prediction-
based approach assumes that anomalous events are un-predictable and then builds a model that can predict fu-ture frames according to past frames, and poor predictionsof future frames indicate the presence of anomalies [ 19].
The prediction-based approaches further consider short-term temporal relationships to model appearance and mo-
tion patterns. However, due to the appearance and motion
variations of adjacent frames being minimal, the predic-
tor can predict the future frame better based on the past
few frames, even for anomalous samples. The inherent
lack of capability of these two modeling paradigms in min-
ing higher-level visual features and comprehensive tempo-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
14592
ral context relationships limits their further performance im-
provement.
In this paper, we aim to explore better methods for
mining complex spatio-temporal relationships in the video.Inspired by video codec theory [ 15,18], we propose a
brand-new V AD paradigm: video event restoration based on
keyframes for V AD. In the video codec [ 18], three types of
frames are utilized, namely I-frame, P-frame, and B-frame.I-frame contains the complete appearance information ofthe current frame, which is called a keyframe. P-frame con-
tains the motion difference information from the previous
frame, and B-frame contains the motion difference betweenthe previous and next frames. Based on these three types
of frames containing explicit appearance and motion rela-
tive relationships, the video can be decoded. Inspired by
this process, our idea sprang up: It should be also theoreti-cally feasible if we give keyframes that contain only implicit
relative relations between appearance and motion, and thenencourage DNN to explore and mine the potential spatio-
temporal variation relationships therein to infer the missing
multiple frames for video event restoration. To motivate
DNN to explore and learn spatio-temporal evolutionary re-lationships in the video actively, we do not provide frameslike P-frames or B-frames that contain explicit motion in-
formation as input cues. This task is extremely challengingcompared to reconstruction and prediction-based tasks, be-
cause DNN must learn to infer the missing multiple framesbased on keyframes only. This requires a strong regularity
and temporal correlation of the event in the video for a better
restoration. On the contrary, video restoration will be poor
for anomalous events that are irregular and random. Underthis assumption, our proposed keyframes-based video event
restoration task can be exactly applicable to V AD. Fig. 1
compares the video codec and video event restoration task
with an illustration.
To perform this challenging task, we propose a novel
U-shaped Swin Transformer Network with Dual Skip
Connections ( USTN-DSC ) for video event restoration
based on keyframes. USTN-DSC follows the classic U-Net [ 36] architecture design, where its backbone consists of
multiple layers of swin transformer (ST) [ 21] blocks. Fur-
thermore, to cope with the complex motion patterns in the
video so as to better restore the video event, we build dual
skip connections in USTN-DSC. Specifically, we introduce
a cross-attention and a temporal upsampling residual skip
connection to further assist in restoring complex dynamicand static motion object features in the video. In addition, to
ensure that the restored video sequence has the consistencyof temporal variation with the real video sequence, we pro-
pose a simple and effective adjacent frame difference (AFD)
loss. Compared with the commonly used optical flow con-
straint loss [ 19], AFD loss is simpler to be calculated while
having comparable constraint effectiveness.The main contributions are summarized as follows:
• We introduce a brand-new video anomaly detection
paradigm that is to restore the video event based onkeyframes, which can more effectively mine and learn
higher-level visual features and comprehensive tempo-ral context relationships in the video.
• We introduce a novel model called USTN-DSC for
video event restoration, where a cross-attention and atemporal upsampling residual skip connection are in-
troduced to further assist in restoring complex dynamicand static motion object features in the video.
• We propose a simple and effective AFD loss to con-
strain the motion consistency of the video sequence.
• USTN-DSC outperforms most existing methods on
three benchmarks, and extensive ablation experiments
demonstrate the effectiveness of USTN-DSC.
|
Zeng_PEFAT_Boosting_Semi-Supervised_Medical_Image_Classification_via_Pseudo-Loss_Estimation_and_CVPR_2023 | Abstract
Pseudo-labeling approaches have been proven beneficial
for semi-supervised learning (SSL) schemes in computer vi-
sion and medical imaging. Most works are dedicated to
finding samples with high-confidence pseudo-labels from
the perspective of model predicted probability. Whereas
this way may lead to the inclusion of incorrectly pseudo-
labeled data if the threshold is not carefully adjusted. In
addition, low-confidence probability samples are frequently
disregarded and not employed to their full potential. In
this paper, we propose a novel Pseudo-loss Estimation
andFeature Adversarial Training semi-supervised frame-
work, termed as PEFAT, to boost the performance of multi-
class and multi-label medical image classification from the
point of loss distribution modeling and adversarial train-
ing. Specifically, we develop a trustworthy data selec-
tion scheme to split a high-quality pseudo-labeled set, in-
spired by the dividable pseudo-loss assumption that clean
data tend to show lower loss while noise data is the oppo-
site. Instead of directly discarding these samples with low-
quality pseudo-labels, we present a novel regularization
approach to learn discriminate information from them via
injecting adversarial noises at the feature-level to smooth
the decision boundary. Experimental results on three med-
ical and two natural image benchmarks validate that our
PEFAT can achieve a promising performance and surpass
other state-of-the-art methods. The code is available at
https://github.com/maxwell0027/PEFAT.
| 1. Introduction
Deep learning has achieved remarkable success in vari-
ous computer vision tasks [4–6,13,15,19]. This success has
∗Equal contribution.†Y . Xia is the corresponding author. This work
was supported in part by the National Natural Science Foundation of China
under Grants 62171377, and in part by the Key Research and Development
Program of Shaanxi Province, China, under Grant 2022GY-084.
(a) Illustration of traditional SSL methods (top) and our method (bottom).
(b) Probability distribution of labeled data (left) and validation data (right),
when using the warm-upped model on ISIC2018 dataset.
Figure 1. (a) shows the main difference between our method
and other SSL methods, our method selects high-quality pseudo-
labeled data by pseudo-loss estimation, and also injects feature-
level adversarial noises for better unlabeled data mining; (b) in-
dicates the phenomenon that clean pseudo-labeled set is hard to
collect when using probability-based threshold, which is mainly
attributed to the over-confident prediction.
also made practical applications more accessible, including
medical image analysis (MIA) [10,21,30,31,35,39]. How-
ever, unlike computer vision, annotating a large-scale med-
ical image dataset requires expert knowledge and is time-
consuming and costly. Alternatively, unlabeled data can be
collected from clinical sites in a more available way, thereby
mitigating the cost of data annotation by leveraging these
unlabeled data.
Semi-supervised Learning (SSL) has drawn a lot of at-
tention due to its superior performance, by only leveraging
limited labeled data and a vast number of unlabeled data.
Under the SSL setting, it is critical to mine adequate infor-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
15671
mation from unlabeled data. In the existing SSL methods,
pseudo-labeling [14,22,29,37,38] and consistency regular-
ization [12, 24, 25] are the mainstream. The former focuses
on finding confident pseudo-labels for re-training, and the
latter aims to improve the robustness of the model by keep-
ing one logical distribution similar to the other.
However, most SSL methods encounter two issues. First,
unreliable pseudo-labels are a problem with the threshold-
selecting data method based on predicted probability as it
often introduces numerous incorrect pseudo-labels due to
confirmation bias. As illustrated in Figure 1b, unlabeled
data with both correct and incorrect pseudo-labels follow
similar probability distributions. Second, informative unla-
beled samples are underutilized as unselected data with low
probabilities typically cluster around the decision boundary.
Recent studies [1, 17] have found that neural networks tend
to fit clean data first and then memorize noise data during
training, resulting in lower loss for clean data and higher
loss for noise data in early stages of training. Furthermore,
some works [9, 25] have investigated the effects of adver-
sarial training under semi-supervised settings, which show
potential for learning from low-quality pseudo-labeled data.
All these findings pave the way towards solving the afore-
mentioned problems.
In this paper, we propose a novel SSL method called
Pseudo-loss Estimation and Feature Adversarial Training
(PEFAT) for multi-class and multi-label medical image
classification. First, we introduce a new estimation scheme
for reliable pseudo-labeled data selection from the perspec-
tive of pseudo-loss distribution. It is motivated by our argu-
ment that there is a dividable loss distribution between cor-
rect and incorrect pseudo-labeled data. Specifically, we first
warm up the model on training data with contrastive learn-
ing, in order to learn unbiased representation. Then we set
up a two-component Gaussian Mixture Model (GMM) [28]
to learn prior loss distribution on labeled data. In the
procedure of pseudo-labeled data selection, we feed cross
pseudo-loss to the fitted GMM and obtain the trustworthy
pseudo-labeled data with posterior probability. Second, we
propose a feature adversarial training (FAT) strategy that in-
jects adversarial noises in the feature-level to smooth the de-
cision boundary, aiming at for further utilizing the rest uns-
elected but informative data. Although FAT is originally de-
signed for the rest data, it can also be applied to the selected
pseudo-labeled data. Based on the technics above, our PE-
FAT successfully boosts the classification performance in
MIA from the point of trustworthy pseudo-labeled data se-
lection and adversarial consistency regularization.
To summarize, our main contributions are three-fold. (1)
Different from previous works that select pseudo-labeled
data with a probability threshold, we present a new se-
lection approach from the perspective of the loss distribu-
tion, which exhibits superior ability in high-quality pseudo-labeled data collection. (2) We propose a new adversarial
regularization strategy to fully leverage the rest unlabeled
but informative data, which benefits the model in decision
boundary smoothing and better representation learning. (3)
Extensive experimental results on three public medical im-
age datasets and two natural image datasets demonstrate the
superiority of the proposed PEFAT, which significantly sur-
passes other advanced SSL methods.
|
Yang_K3DN_Disparity-Aware_Kernel_Estimation_for_Dual-Pixel_Defocus_Deblurring_CVPR_2023 | Abstract
The dual-pixel (DP) sensor captures a two-view image
pair in a single snapshot by splitting each pixel in half.
The disparity occurs in defocus blurred regions between
the two views of the DP pair, while the in-focus sharp
regions have zero disparity. This motivates us to propose a
K3DN framework for DP pair deblurring, and it has three
modules: i) a disparity-aware deblur module. It estimates
a disparity feature map, which is used to query a trainable
kernel set to estimate a blur kernel that best describes
the spatially-varying blur. The kernel is constrained to
be symmetrical per the DP formulation. A simple Fourier
transform is performed for deblurring that follows the blur
model; ii) a reblurring regularization module. It reuses the
blur kernel, performs a simple convolution for reblurring,
and regularizes the estimated kernel and disparity feature
unsupervisedly, in the training stage; iii) a sharp region
preservation module. It identifies in-focus regions that
correspond to areas with zero disparity between DP
images, aims to avoid the introduction of noises during
the deblurring process, and improves image restoration
performance. Experiments on four standard DP datasets
show that the proposed K3DN outperforms state-of-the-art
methods, with fewer parameters and flops at the same time.
| 1. Introduction
Dual-pixel (DP) sensors are widely employed in digital
single-lens reflex cameras (DSLR) and smartphone cam-
eras. Each pixel of a DP sensor is divided into two photodi-
odes to provide two sub-views of a scene, namely, left and
right views, to assist in auto-focusing. Recently, researchers
have used the two-view DP pair to benefit several computer
vision tasks, especially defocus deblurring [ 1,2,24]. Specif-
ically, blurred pixels in the left and right view of the DP pair
exhibit an amount of shift, termed DP disparity, which pro-
vides information for the blur kernel estimation (the key for
defocus deblurring [ 14,16,19]).
†Corresponding author.⇤Equal contribution.010203024252627
KPAC(21)IFAN(21)
DEMENet(19)DPDNet(20)RDPD(21)DDDNet(21)BAMBNet(22)Restormer(22)OursOursDeepRFT(21)DRBNet (22)
Params (M)PSNR (dB)01K2K3K4K5KOursOursDeepRFT(21)DRBNet (22)IFAN(21)KPAC(21)RDPD(21)BAMBNet(22)Restormer(22)
DDDNet(21)DPDNet(20)
DEMENet(19)Flops (G)Figure 1. Comparison of our method and state-of-the-art methods
on the real DPD-blur dataset [ 1]. The Blue andredcycles sep-
arately denote our tiny and lightweight models. We show PSNRs
(dB) with respect to model parameters (M) and Flops (G). The
numbers in the brackets beside methods denote publication years.
Best viewed in color on the screen.
Existing DP-based defocus deblurring methods are gen-
erally divided into two categories: end-to-end-based and
disparity (defocus map)-based methods. End-to-end based
methods [ 1,2,11,17,19,24] directly restore an all-in-focus
image from a defocus blurred DP pair without consider-
ing DP domain knowledge for network design. Therefore, it
is challenging for these methods to tackle spatially-varying
and large blur. Disparity-based methods [ 10,11,15,23] esti-
mate the DP disparity (or defocus map) to aid the restoration
of an all-in-focus image. However, these methods either re-
quire extra data (ground-truth DP disparity as supervision)
[15], neglect DP disparity knowledge when warping [ 10],
heavily depend on the number of network branches [ 11], or
rely on pre-calibrated kernel sets and additional priors [ 23]
for training.
Our method, K3DN, belongs to the disparity-based
group, and it has high performance on real DP datasets (see
Fig.1for an example). K3DN follows the mathematic in-
dications of DP and blur models, while not requiring pre-
calibrated kernel sets and extra data. K3DN consists of
three modules: i) a disparity-aware deblur module; ii) a re-
blurring regularization module; iii) a sharp region preserva-
tion module.
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
13263
The first component is a disparity-aware deblur module
that estimates spatially-varying kernels for deblurring. We
define a trainable kernel set, and kernels in the set can model
diverse blur patterns. Each kernel satisfies the horizontally
symmetric constraint [ 2,16], following the DP image for-
mulation. Given the input DP image pair, we first estimate
a disparity feature map, and then use the disparity feature
map to query the kernel set to estimate a DP blur kernel that
best describes the spatially-varying blur. With the blur ker-
nel, we simply perform a Fourier transform for deblurring
that follows the blur model.
The second component we developed is a reblurring reg-
ularization module. It reuses the DP blur kernel, takes an
all-in-focus image as input, and performs a simple convolu-
tion to generate a blurred image. By enforcing the similarity
between the blurred image and the input DP image, we reg-
ularize the proposed deblur module.
Our third component is a sharp region preservation mod-
ule. Not all regions of DP pairs are blurred. We observe that
previous works neglect to preserve pixel values or features
from in-focus sharp regions. By mining (based on the DP
model) similarities between features before and after our
deblur module, we preserve features from all-in-focus re-
gions and improve feature quality, leading to better image
restoration performance.
Please note that all intermediate parts ( e.g., the train-
able kernel set and disparity estimator) do not require ex-
tra ground-truth information ( e.g., pre-calibrated kernel sets
and disparity) for training. We only need ground-truth all-
in-focus images for end-to-end training.
Our contributions are summarized below:
•A simple K3DN framework for DP image pair defocus
deblurring;
•A disparity-aware deblur module. It has a trainable
kernel set, and estimates a disparity feature map to
query the set to estimate a DP blur kernel that best de-
scribes the spatial varying blur;
•A reblurring regularization module. It follows the DP
model, reuses the blur kernel from the deblur module,
and regularizes the deblur module;
•A sharp region preservation module. It mines in-focus
regions of DP pairs, improves feature quality from in-
focus regions, resulting in better restoration perfor-
mance.
Experimentally, we validate the proposed K3DN on four
standard benchmark datasets [ 1,2,15,16], showing state-of-
the-art performance in image restoration and model size.
|
Zhang_Text-Visual_Prompting_for_Efficient_2D_Temporal_Video_Grounding_CVPR_2023 | Abstract
In this paper, we study the problem of temporal video
grounding ( TVG ), which aims to predict the starting/ending
time points of moments described by a text sentence within
a long untrimmed video. Benefiting from fine-grained 3D
visual features, the TVG techniques have achieved remark-
able progress in recent years. However, the high complexity
of 3D convolutional neural networks (CNNs) makes extract-
ing dense 3D visual features time-consuming, which calls
for intensive memory and computing resources. Towards
efficient TVG, we propose a novel text-visual prompting
(TVP ) framework, which incorporates optimized perturba-
tion patterns (that we call ‘prompts’) into both visual in-
puts and textual features of a TVG model. In sharp con-
trast to 3D CNNs, we show that TVP allows us to effec-
tively co-train vision encoder and language encoder in a
2D TVG model and improves the performance of cross-
modal feature fusion using only low-complexity sparse 2D
visual features. Further, we propose a Temporal- Distance
IoU (TDIoU ) loss for efficient learning of TVG. Experi-
ments on two benchmark datasets, Charades-STA and Ac-
tivityNet Captions datasets, empirically show that the pro-
posed TVP significantly boosts the performance of 2D TVG
(e.g., 9.79% improvement on Charades-STA and 30.77%
improvement on ActivityNet Captions) and achieves 5×in-
ference acceleration over TVG using 3D visual features.
Codes are available at Open.Intel .
| 1. Introduction
In recent years, we have witnessed great progress on
temporal video grounding ( TVG ) [30, 74]. One key to
this success comes from the fine-grained dense 3D vi-
sual features extracted by 3D convolutional neural networks
(CNNs) ( e.g., C3D [56] and I3D [3]) since TVG tasks de-
mand spatial-temporal context to locate the temporal inter-
val of the moments described by the text query. However,
due to the high cost of the dense 3D feature extraction, most
existing TVG models only take these 3D visual features ex-
Figure 1. The architecture and performance comparison among TVG
methods: a)3D TVG methods [14, 16, 18, 34, 43, 60–62, 64, 67, 69, 71, 73],
b)2D TVG methods [1, 7], and c)TVP-based 2D TVG (Ours), d)over-
all performance comparison. Ours is the most efficient (least inference
time) and achieves competitive performance compared to 3D TVG meth-
ods. In contrast to existing TVG methods, which utilize dense video fea-
tures extracted by non-trainable offline 3D CNNs and textual features, our
proposed framework utilizes a trainable 2D CNN as the vision encoder to
extract features from sparsely-sampled video frames with a universal set of
frame-aware visual prompts and adds text prompts in textual feature space
for end-to-end regression-based modeling.
tracted by offline 3D CNNs as inputs instead of co-training
during TVG model training.
Although models using 3D visual features (that we
call ‘ 3D methods or models ’) outperform these using the
2D features (that we call ‘ 2D methods or models ’), a
unique advantage of 2D methods is that extracting 2D
visual features can significantly reduce the cost in TVG
tasks [14, 15, 30, 34, 61, 62, 69, 74, 75]. An efficient and
lightweight solution with reasonable performance is also
demanded in computer vision, NLP, and video-language
tasks [19, 23, 38, 41, 68, 76–80]. As discussed above, the
methods employing 3D video features are challenging to be
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
14794
employed in practical applications. It thus has significant
practical and economic value to develop compact 2D solu-
tions for TVG tasks. In this paper, we ask:
How to advance 2D TVG methods so as to achieve
comparable results to 3D TVG methods?
To address this problem, we propose a novel text-visual
prompting ( TVP ) framework for training TVG models us-
ing 2D visual features. As shown in Fig. 1 , for existing
2D TVG and 3D TVG methods, they all utilize offline pre-
trained vision encoders and language encoders to perform
feature extraction. In contrast, our proposed TVP frame-
work is end-to-end trainable. Furthermore, benefiting from
text-visual prompting and cross-modal pretraining on large-
scale image-text datasets, our proposed framework could
achieve comparable performance to 3D TVG methods with
significant inference time acceleration.
Conventionally, TVG methods consist of three stages:
①extracting feature from visual and text inputs; ②multi-
modal feature fusion; ③cross-modal modelling. In contrast
to conventional methods, TVP incorporates optimized input
perturbation patterns (that we call ‘ prompts ’) into both vi-
sual inputs and textual features of a TVG model. We apply
trainable parameters in the textual features as text prompts
and develop a universal set of frame-aware patterns as visual
prompts. Specially, we sample a fixed number of frames
from a video and optimize text prompts for the input query
sentence and a set of visual prompts for frames with differ-
ent temporal locations during training. During testing, the
same set of optimized visual prompts and textual prompts
are applied to all test-time videos. We refer readers to Fig. 2
for illustrations of visual prompts and text prompts intro-
duced. To the best of our knowledge, our work makes the
first attempt to utilize prompt learning to successfully im-
prove the performance of regression-based TVG tasks using
2D visual features.
Compared to 3D CNNs, 2D CNNs loses spatiotempo-
ral information of the video during feature extraction. In-
spired by the success of transformers on the vision-language
tasks [9, 22, 35, 44, 47, 54, 55] and the recent application of
prompt learning to transformers in both vision and language
domains [2,25,27,32,37,40], we choose transformer as our
base TVG model and propose to utilize prompts to compen-
sate for the lack of spatiotemporal information in 2D visual
features. Furthermore, we develop a Temporal- Distance
IoU ( TDIoU ) loss for training our proposed framework.
There are two aspects that distinguish our proposed frame-
work from existing works. First , our proposed framework is
designed to boost the performance of the regression-based
TVG methods utilizing 2D CNNs as the vision encoder,
not for transfer learning [2, 21, 26] Second , our proposed
framework utilizes 2D CNN to extract visual features from
(a) Text Prompts
(b) Frame-aware Visual Prompts
Figure 2. Text-visual prompting illustration. (a) Text prompts are directly
applied in the feature space. (b) A set of visual prompts are applied to
video frames in order.
sparsely-sampled video frames, which requires less mem-
ory and is easier to be applied in practical applications com-
pared to 3D methods [34,60–62,69,75], especially for long
videos. Furthermore, thanks to the compact 2D CNN as
the vision encoder, our proposed framework could imple-
ment the language encoder and visual encoder co-training
for better multimodal feature fusion. In summary, the con-
tributions of this work are unfolded below:
• We propose an effective and efficient framework to
train 2D TVG models, in which we leverage TVP
(text-visual prompting) to improve the utility of sparse
2D visual features without resorting to costly 3D fea-
tures. To the best of our knowledge, it is the first work
to expand the application of prompt learning for re-
solving TVG problems. Our method outperforms all
of 2D methods and achieves competitive performance
to 3D TVG methods.
• Technology-wise, we integrate visual prompt with text
prompt to co-improve the effectiveness of 2D visual
features. On top of that, we propose TDIoU (temporal-
distance IoU)-based prompt-model co-training method
to obtain high-accuracy 2D TVG models.
• Experiment-wise, we show the empirical success of
our proposal to boost the performance of 2D TVG on
Charades-STA and ActivityNet Captions datasets, e.g.,
9.79% improvement in Charades-STA, and 30.77% in
ActivityNet-Captions together with 5×inference time
acceleration over 3D TVG methods.
|
Ye_DistilPose_Tokenized_Pose_Regression_With_Heatmap_Distillation_CVPR_2023 | Abstract
In the field of human pose estimation, regression-based
methods have been dominated in terms of speed, while
heatmap-based methods are far ahead in terms of per-
formance. How to take advantage of both schemes re-mains a challenging problem. In this paper , we proposea novel human pose estimation framework termed Distil-Pose, which bridges the gaps between heatmap-based andregression-based methods. Specifically, DistilPose maxi-mizes the transfer of knowledge from the teacher model(heatmap-based) to the student model (regression-based)through Token-distilling Encoder (TDE) and SimulatedHeatmaps. TDE aligns the feature spaces of heatmap-based and regression-based models by introducing tok-enization, while Simulated Heatmaps transfer explicit guid-ance (distribution and confidence) from teacher heatmapsinto student models. Extensive experiments show that theproposed DistilPose can significantly improve the perfor-mance of the regression-based models while maintaining ef-ficiency. Specifically, on the MSCOCO validation dataset,DistilPose-S obtains 71.6% mAP with 5.36M parameters,2.38 GFLOPs, and 40.2 FPS, which saves 12.95 ×, 7.16×
computational cost and is 4.9 ×faster than its teacher
model with only 0.9 points performance drop. Furthermore,DistilPose-L obtains 74.4% mAP on MSCOCO validationdataset, achieving a new state-of-the-art among predomi-nant regression-based models. Code will be available athttps://github.com/yshMars/DistilPose .
| 1. Introduction
2D Human Pose Estimation (HPE) aims to detect the
anatomical joints of a human in a given image to estimate
the poses. HPE is typically used as a preprocessing mod-ule that participates in many downstream tasks, such as ac-tivity recognition [ 28,31], human motion analysis [ 1], mo-
∗Equal contribution. This work was done when Suhang Ye was an
intern at Tencent Y outu Lab.
†Corresponding author: [email protected]
0 1 02 03 04 05 06 07 08 0 90AP(%)
ParameterDistilPose-S(ours)DistilPose-L(ours)
RLE-Res50[ 7]RLE-HRNet-W48[ 7]
PRTR-HRNet-W32[ 8]
PRTR-Res50[ 8]SimpleBaseline-Res50[ 28]
SimpleBaseline-Res152[ 28]
0 51 0 1 5 20
GFLOPsDiameter
Figure 1. Comparisons between the SOTA methods and the
proposed DistilPose on MSCOCO valdataset. Red circles at the
upper left corner denote DistilPose. DistilPose outperforms SOTAmodels in terms of accuracy (AP), Parameter and computationalcost (GFLOPs).
tion capture [ 16], etc. Previous studies on 2D HPE can
be mainly divided into two mainstreams: heatmap-based
and regression-based methods. Regression-based methods
have significant advantages in speed and are well-suited formobile devices. However, the insufficient accuracy of re-gression models will affect the performance of downstreamtasks. In contrast, heatmap-based methods can explicitlylearn spatial information by estimating likelihood heatmaps,resulting in high accuracy on HPE tasks. But the estimationof likelihood heatmaps requires exceptionally high compu-tational cost, which leads to slow preprocessing operations.Thus, how to take advantages of both heatmap-based andregression-based methods remains a challenging problem.
One possible way to solve the above problem is to trans-
fer the knowledge from heatmap-based to regression-basedmodels [ 8,23]. However, due to the different output spaces
of regression models and heatmap models (the former isa vector, and the latter is a heatmap), transferring knowl-
edge between heatmaps and vectors faces the following twoproblems: (1) The regression head usually vectorizes the
feature map output by the backbone. And much spatialinformation will be lost through Global Average Pooling
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
2163
(GAP) or Flatten operation. Thus, previous work failed
to transfer heatmap knowledge to regression models fully.(2) Compared to the coordinate regression, heatmaps natu-rally contain shape, position, and gradient information [ 3].
Due to the lack of explicit guidance for such information,regression-based methods are more difficult to learn theimplicit relationship between features and keypoints thanheatmap-based methods.
In this paper, we propose a novel human pose esti-
mation framework, DistilPose, which learns to transferthe heatmap-based knowledge from a teacher model to aregression-based student model. DistilPose mainly includesthe following two components:
(1) A knowledge-transferring module called Token-
distilling Encoder (TDE) is designed to align the featurespaces of heatmap-based and regression-based models byintroducing tokenization, which consists of a series of trans-
former encoders. TDE can capture the relationship between
keypoints and feature maps/other keypoints [ 10,32]. (2) We
propose to simulate heatmaps to obtain the heatmap infor-mation for regression-based students explicitly. The result-ing Simulated Heatmaps provide two explicit guidelines,including each keypoint’s 2D distribution and confidence.
Note that the proposed Simulated Heatmaps can be insertedbetween any heatmap-based and regression-based methodsto transfer heatmap knowledge to regression models.
DistilPose achieves comparable performance to
heatmap-based models with less computational cost andsurpasses the state-of-the-art (SOTA) regression-basedmethods. Specifically, on the MSCOCO validation dataset,DistilPose-S achieves 71.6% mAP with 5.36M parameters,
2.38GFLOPs and 40.3FPS. DistilPose-L achieves 74.4%
mAP with 21.27M parameters and 10.33GFLOPs, which
outperforms its heatmap-based teacher model in perfor-mance, parameters and computational cost. In summary,DistilPose significantly reduces the computation whileachieving competitive accuracy, bringing advantages fromboth heatmap-based and regression-based schemes. Asshown in Figure 1, DistilPose outperforms previous SOTA
regression-based methods, such as RLE [ 8] and PRTR [ 9]
with fewer parameters and GFLOPs.
Our contributions are summarized as follows:
• We propose a novel human pose estimation frame-
work, DistilPose, which is the first work to transfer knowledge between heatmap-based and regression-based models losslessly.
• We introduce a novel Token-distilling Encoder
(TDE) to take advantage of both heatmap-based and regression-based models. With the proposed TDE, the gap between the output space of heatmaps and coordinate vectors can be bridged in a tokenized manner.
• We propose Simulated Heatmaps to model explicit heatmap information, including 2D keypoint distribu-
tions and keypoint confidences. With the aid of Simulated Heatmaps, we can transform the regression-based HPE task into a more straightforward learning task that fully exploits local information. Simulated Heatmaps can be applied to any heatmap-based and regression-based models for transferring heatmap knowledge to regression models.
|
Yang_NeRFVS_Neural_Radiance_Fields_for_Free_View_Synthesis_via_Geometry_CVPR_2023 | Abstract
We present NeRFVS, a novel neural radiance fields
(NeRF) based method to enable free navigation in a room.
NeRF achieves impressive performance in rendering im-
ages for novel views similar to the input views while suffer-
ing for novel views that are significantly different from the
training views. To address this issue, we utilize the holis-
tic priors, including pseudo depth maps and view coverage
information, from neural reconstruction to guide the learn-
ing of implicit neural representations of 3D indoor scenes.
Concretely, an off-the-shelf neural reconstruction method is
leveraged to generate a geometry scaffold. Then, two loss
functions based on the holistic priors are proposed to im-
prove the learning of NeRF: 1) A robust depth loss that can
tolerate the error of the pseudo depth map to guide the ge-
ometry learning of NeRF; 2) A variance loss to regularize
the variance of implicit neural representations to reduce the
geometry and color ambiguity in the learning procedure.
These two loss functions are modulated during NeRF op-
timization according to the view coverage information to
reduce the negative influence brought by the view coverage
imbalance. Extensive results demonstrate that our NeRFVS
outperforms state-of-the-art view synthesis methods quanti-
tatively and qualitatively on indoor scenes, achieving high-
fidelity free navigation results.
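The two holistic-prior losses described above can be sketched as follows. The Huber form of the robust depth loss and the weight-variance form of the variance loss are assumptions made for illustration (the paper's exact definitions are not reproduced here); tensor shapes follow a typical NeRF volume-rendering implementation.

```python
import torch
import torch.nn.functional as F

def robust_depth_loss(rendered_depth, pseudo_depth, delta=0.2):
    """Depth supervision that saturates for large residuals, so failures in the
    pseudo depth map (holes, shifts, floaters) contribute less gradient.
    A Huber loss is used here as one possible robust choice."""
    return F.huber_loss(rendered_depth, pseudo_depth, delta=delta)

def variance_loss(weights, t_vals, rendered_depth):
    """Penalize the spread of per-ray sample weights around the rendered depth,
    encouraging the radiance to concentrate near a single surface.
    weights: (num_rays, num_samples) volume-rendering weights.
    t_vals:  (num_rays, num_samples) sample distances along each ray.
    rendered_depth: (num_rays,) expected depth per ray."""
    var = (weights * (t_vals - rendered_depth.unsqueeze(-1)) ** 2).sum(dim=-1)
    return var.mean()
```

In the paper these terms are further modulated per view according to the view coverage information; that weighting is omitted in this sketch.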
| 1. Introduction
Reconstructing an indoor scene from a collection of im-
ages and enabling users to navigate inside it freely is a core
component for many downstream applications. It is the
most challenging novel-view-synthesis (NVS) task, since it
requires high fidelity synthesis from any view , including not
only views similar to the training views (interpolation), but
also views that are significantly different from input views
(extrapolation), as shown in Fig. 1. To clarify its differ-
ence to other NVS tasks, we term it as free-view-synthesis
(FVS). The difficulties of FVS lie in not only the common
obstacles in scene reconstruction, including low-texture ar-
eas, complex scene geometry, and illumination change, but
also view imbalance, e.g., casual photos usually cover the
scene unevenly, with hundreds of frames for one table and
a few for the floor and wall, as shown in Fig. 3.
Recently, NeRF has emerged as a promising technique
for 3D reconstruction and novel view synthesis. Although
NeRF can achieve impressive interpolation performance, its
extrapolation ability is relatively poor [35], especially for
low-texture and few-shot regions. In contrast, some neural
reconstruction methods can recover the holistic scene ge-
ometry successfully with various priors [9,22,25,34], while
the synthesized images from these methods contain plenty
of artifacts and are over-smoothed. Inspired by the phe-
nomena, we demonstrate that equipping the NeRF with the
scene priors of the geometry captured from neural recon-
struction is a potential solution for indoor FVS.
Extending NeRF to enable FVS with geometry from
neural reconstruction methods is a non-trivial task with two
main challenges. 1) Depth error . The reconstructed ge-
ometry might contain some failures, including holes, depth
shifting, and floaters. The optimization of NeRF relies on
the multi-view color consistency, while these failures may
conflict with the multi-view color consistency, resulting in
artifacts. 2) Distribution ambiguity . The depth from NeRF
is a weighted sum of sampling distance. Merely supervis-
ing the depth expectation leads to arbitrary radiance distri-
bution, especially in low-texture and few-shot regions. This
ambiguous distribution leads to floaters and blur among ren-
dered images, as shown in Fig. 5.
In this paper, we propose a novel method which exploits
the holistic priors, including pseudo depth maps and view
coverage information, outputted from a geometry scaffold
to guide NeRF optimization, significantly improving qual-
ity on low-texture and few-shot regions. Specifically, to ad-
dress the depth error , we propose a robust depth loss that
can tolerate the error from the pseudo depth maps, reduc-
ing the negative impact of inaccura |
Xu_NeuralLift-360_Lifting_an_In-the-Wild_2D_Photo_to_a_3D_Object_CVPR_2023 | Abstract
Virtual reality and augmented reality (XR) bring increas-
ing demand for 3D content generation. However, creating
high-quality 3D content requires tedious work from a hu-
man expert. In this work, we study the challenging task of
lifting a single image to a 3D object and, for the first time,
demonstrate the ability to generate a plausible 3D object
with 360◦views that corresponds well with the given ref-
erence image. By conditioning on the reference image, our
model can fulfill the everlasting curiosity for synthesizing
novel views of objects from images. Our technique sheds
light on a promising direction of easing the workflows for
3D artists and XR designers. We propose a novel frame-
work, dubbed NeuralLift-360 , that utilizes a depth-aware
neural radiance representation (NeRF) and learns to craft
the scene guided by denoising diffusion models. By intro-
ducing a ranking loss, our NeuralLift-360 can be guided
with rough depth estimation in the wild. We also adopt
a CLIP-guided sampling strategy for the diffusion prior to
provide coherent guidance. Extensive experiments demon-
strate that our NeuralLift-360 significantly outperforms ex-
isting state-of-the-art baselines. Project page: https:
//vita-group.github.io/NeuralLift-360/
| 1. Introduction
Creating 3D content has been a long-standing problem
in computer vision. This problem enables various applica-
tions in game studios, home decoration, virtual reality, and
augmented reality. Over the past few decades, the manual
task has dominated real scenarios, which requires tedious
professional expert modeling. Modern artists rely on spe-
cial software tools (e.g., Blender, Maya3D, 3DS Max, etc.)
and time-demanding manual adjustments to realize imag-
inations and transform them into virtual objects. Mean-
while, automatic 3D content creation pipelines serve as ef-
fective tools to facilitate human efforts. These pipelines typ-
ically capture hundreds of images and leverage multi-view
stereo [71] to quickly model fine-grained virtual landscapes.
More recently, researchers have started aiming at a more ambitious goal, to create a 3D object from a single im-
age [9, 10, 12, 21, 27, 51, 66, 68]. This enables broad ap-
plications since it greatly reduces the prerequisite to a min-
imal request. Existing approaches can be mainly catego-
rized into two major directions. One line of work utilizes
learning priors from large-scale datasets with multi-view
images [10,21,51,66,68]. These approaches usually learn a
conditional neural network to predict 3D information based
on input images. However, due to their poor generalization
ability, drastic performance degradation is observed when
the testing image is out-of-distribution.
Another direction constructs the pipeline on top of the
depth estimation techniques [65]. Under the guidance of
monocular depth estimation networks [43, 44], the 2D im-
age is firstly back-projected to a 3D data format (e.g., point
cloud or multi-plane image) and then re-projected into novel
views. After that, advanced image inpainting techniques
are then adopted to fill missing holes [52] produced during
the projection. However, most of them can be highly af-
fected by the quality of the estimated depth maps. Though
LeRes [73] attempted to rectify the predicted depth by refin-
ing the projected 3D point cloud, their results do not gener-
alize well to an arbitrary image in the wild. Overall, the
aforementioned approaches are either adopted in limited
scenarios (e.g. face or toy examples [4, 75]), or only pro-
duce limited viewing directions [8, 70] when being applied
to scenes in the wild. Different from all these approaches,
we focus on a more challenging task and for the first time,
show promising results by lifting a single in-the-wild im-
age into a 3D object with 360◦novel views .
Attracted by the dramatic progress of neural volumet-
ric rendering on 3D reconstruction tasks, we consider
building our framework based on Neural Radiance Fields
(NeRFs) [35]. The original NeRF takes hundreds of training
views and their camera poses as inputs to learn an implicit
representation. Subsequent models dedicate tremendous ef-
forts [8, 19, 70, 75] to apply NeRF to sparse training views.
The most similar work [70] to our method proposes to opti-
mize a NeRF using only a single image and its correspond-
ing depth map. However, it renders limited views from a
small range of angles, and the prerequisite of a high-quality
depth map largely constrains its practical usage.
To address the above issues, we propose a novel frame-
work, coined as NeuralLift-360 , which aims to ease the
creation of 3D assets by building a bridge to convert diverse
in-the-wild 2D photos to sophisticated 3D contents in 360◦
views and enable its automation. The major challenge in
our work is that the content on the back side is hidden and
hard to hallucinate. To tackle these hurdles, we consider
the diffusion priors together with the monocular depth esti-
mator as the cues for hallucination. Modern diffusion mod-
els [41,48,50] are trained on a massive dataset (e.g., 5B text-
to-image pairs [24]). During inference time, they can gen-
erate impressive photorealistic photos based on simple text
inputs. By adopting these learning priors with CLIP [40]
guidance, NeuralLift-360 can generate plausible 3D consis-
tent instances that correspond well to the given reference
image while only requiring minimal additional input, the
correct user prompts. Moreover, rather than simply taking
the depth map from the pre-trained depth estimator as ge-
ometry supervision, we propose to use the relative ranking
information from the rough depth during the training pro-
cess. This simple strategy is observed to robustly mitigate
geometry errors for depth estimations in the wild.
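A hedged sketch of the ranking-based depth supervision described above: only the ordering of randomly sampled pixel pairs in the rough monocular depth is enforced on the rendered depth, which makes the term scale-invariant. The pair-sampling scheme, hinge form, and margin are illustrative assumptions, not the paper's exact loss.

```python
import torch

def depth_ranking_loss(rendered_depth, rough_depth, num_pairs=4096, margin=1e-4):
    """Ranking supervision from a rough monocular depth estimate.
    rendered_depth, rough_depth: (num_pixels,) tensors for one rendered view."""
    n = rendered_depth.shape[0]
    i = torch.randint(0, n, (num_pairs,))
    j = torch.randint(0, n, (num_pairs,))
    # +1 if pixel i is farther than pixel j according to the rough depth, -1 otherwise
    order = torch.sign(rough_depth[i] - rough_depth[j])
    # hinge: the rendered depth difference should agree in sign with the rough ordering
    return torch.clamp(margin - order * (rendered_depth[i] - rendered_depth[j]), min=0).mean()
```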
Our contributions can be summarized as follows,
• Given a single image in the wild, we demonstrate
promising results of them being lifted to 3D. We use
NeRF as an effective scene representation and inte-
grate prior knowledge from the diffusion model.
• We propose a CLIP-guided sampling strategy that ef-
fectively marries the prior knowledge from the diffu-
sion model with the reference image.
• When the reference image is hard to describe exactly,
we finetune the diffusion model on the single image
while maintaining its ability to generate diverse con-
tents to guide the NeRF training.
• We introduce scale-invariant depth supervision that
uses the ranking information. This design alleviates
the need for accurate multi-view consistent depth esti-
mation and broadens the application of our algorithms.
|
Zhang_Implicit_Surface_Contrastive_Clustering_for_LiDAR_Point_Clouds_CVPR_2023 | Abstract
Self-supervised pretraining on large unlabeled datasets
has shown tremendous success in improving the task per-
formance of many 2D and small scale 3D computer vi-
sion tasks. However, the popular pretraining approaches
have not been impactfully applied to outdoor LiDAR point
cloud perception due to the latter’s scene complexity and
wide range. We propose a new self-supervised pretrain-
ing method ISCC with two novel pretext tasks for LiDAR
point clouds. The first task uncovers semantic information
by sorting local groups of points in the scene into a glob-
ally consistent set of semantically meaningful clusters using
contrastive learning, complemented by a second task which
reasons about precise surfaces of various parts of the scene
through implicit surface reconstruction to learn geomet-
ric structures. We demonstrate their effectiveness through
transfer learning on 3D object detection and semantic seg-
mentation in real world LiDAR scenes. We further design
an unsupervised semantic grouping task to show that our
approach learns highly semantically meaningful features.
| 1. Introduction
Robust and reliable 3D LiDAR perception is critical for
autonomous driving systems. Unlike images, LiDAR pro-
vides unambiguous measurements of the vehicle’s 3D en-
vironment. A plethora of perception models have arisen in
recent years to enable a variety of scene understanding tasks
using LiDAR input, such as object detection and semantic
segmentation. However, training these models generally re-
lies on a large quantity of human annotated data, which is
tedious and expensive to produce.
Recently, self-supervised learning has attracted signifi-
cant research attention [5, 7,16–18], as it has the potential
to increase performance on downstream tasks with limited
quantities of annotated data in the image domain. However,
self-supervised learning has shown less impact for outdoor
Figure 1. With frame t as input, we 1) apply contrastive cluster
learning to reason about unsupervised semantic clustering and also
enforce feature consistency across different views, and 2) conduct
implicit surface reconstruction in randomly cropped out regions.
3D point clouds. A core difficulty stems from the rela-
tive difficulty in designing appropriate pretext tasks used
for self-supervision. In the image domain, the ImageNet
dataset [35] provides millions of canonical images of ev-
eryday objects, allowing for straightforward manipulations
to generate pretext tasks that lead to strong object-centric or
semantic group-centric feature learning. While large-scale
unlabeled outdoor LiDAR datasets are relatively easy to col-
lect, the data samples exhibit a high level of scene complex-
ity, sparsity of measurements, and heavy dependency on the
observer positioning and sensor type. These factors pose
great challenges for creating useful pretext tasks.
Recent works have proposed 3D-specific self-supervised
learning, starting with scene-level contrastive learning [44,
49], followed by the work of [30] and [47], which use finer,
region-level granularity to better encode individual compo-
nents of large scale LiDAR point clouds. However, these
techniques do not explicitly make use of regularities in 3D
shapes and surfaces.
In our work, we propose ISCC (Implicit Surface Con-
trastive Clustering) which consists of two new pretext tasks
to automatically learn semantically meaningful feature ex-
traction without annotations for LiDAR point clouds. The
first task focuses on learning semantic information by sort-
ing local groups of points in the scene into a globally consis-
tent set of semantically meaningful clusters using the con-
trastive learning setup [5]. This is augmented with a second
task which reasons about precise surfaces of various parts
of the scene through implicit surface reconstruction to learn
geometric regularities. A high level overview is found in
Figure 1. Furthermore, we showcase a novel procedure to
generate training signals for implicit surface reasoning in
the absence of dense 3D surface meshes which are difficult
to obtain for large scale LiDAR point clouds.
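The first pretext task — sorting local point groups into a globally consistent set of clusters with contrastive learning — can be sketched with a prototype-based formulation as below. This follows a common SwAV-style recipe and is an assumption about the general shape of the objective, not ISCC's exact loss.

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(region_feats_a, region_feats_b, prototypes, temperature=0.1):
    """Cross-view contrastive clustering over learnable prototypes.
    region_feats_a, region_feats_b: (num_regions, dim) L2-normalized features of the
    same local point groups under two augmented views of a LiDAR frame.
    prototypes: (num_clusters, dim) L2-normalized cluster prototypes."""
    logits_a = region_feats_a @ prototypes.t() / temperature
    logits_b = region_feats_b @ prototypes.t() / temperature
    # hard cluster assignments from one view supervise the other view (and vice versa)
    targets_a = logits_a.argmax(dim=-1).detach()
    targets_b = logits_b.argmax(dim=-1).detach()
    return 0.5 * (F.cross_entropy(logits_b, targets_a) + F.cross_entropy(logits_a, targets_b))
```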
Using the large real world KITTI [13] and Waymo [37]
datasets, we show that our approach is superior to re-
lated state-of-the-art self-supervised learning techniques in
downstream finetuning performance in semantic segmenta-
tion and object detection. For example, we see a 72% gain
in segmentation performance on SemanticKITTI versus the
state-of-the-art [48] when fine-tuned with 1% of the anno-
tations, and exceeds the accuracy achieved by using twice
the annotations with random initialized weights. As well,
we analyze the semantic consistency of the learned features
through a new unsupervised semantic grouping task, and
show that our learned features are able to form semantic
groups even in the absence of supervised fine-tuning.
|
Xu_UniDexGrasp_Universal_Robotic_Dexterous_Grasping_via_Learning_Diverse_Proposal_Generation_CVPR_2023 | Abstract
In this work, we tackle the problem of learning universal
robotic dexterous grasping from a point cloud observation
under a table-top setting. The goal is to grasp and lift up ob-
jects in high-quality and diverse ways and generalize across
hundreds of categories and even the unseen. Inspired by
successful pipelines used in parallel gripper grasping, we
split the task into two stages: 1) grasp proposal (pose) gen-
eration and 2) goal-conditioned grasp execution. For the
first stage, we propose a novel probabilistic model of grasp
pose conditioned on the point cloud observation that fac-
torizes rotation from translation and articulation. Trained
on our synthesized large-scale dexterous grasp dataset, this
model enables us to sample diverse and high-quality dex-
terous grasp poses for the object point cloud. For the sec-
ond stage, we propose to replace the motion planning used
in parallel gripper grasping with a goal-conditioned grasp
policy, due to the complexity involved in dexterous grasp-
ing execution. Note that it is very challenging to learn
this highly generalizable grasp policy that only takes re-
alistic inputs without oracle states. We thus propose sev-
eral important innovations, including state canonicaliza-
tion, object curriculum, and teacher-student distillation. In-
tegrating the two stages, our final pipeline becomes the first
to achieve universal generalization for dexterous grasping,
demonstrating an average success rate of more than 60%
on thousands of object instances, which significantly out-
performs all baselines, meanwhile showing only a minimal
generalization gap.
| 1. Introduction
Robotic grasping is a fundamental capability for an agent
to interact with the environment and serves as a prerequi-
site to manipulation, which has been extensively studied
for decades. Recent years have witnessed great progress
in developing grasping algorithms for parallel grippers
[8, 16, 17, 20, 49, 50] that carry high success rate on univer-
sally grasping unknown objects. However, one fundamental
limitation of parallel grasping is its low dexterity which lim-
its its usage to complex and functional object manipulation.
Dexterous grasping provides a more diverse way to grasp
objects and thus is of vital importance to robotics for func-
tional and fine-grained object manipulation [2, 29, 38, 40,
52]. However, the high dimensionality of the actuation
space of a dexterous hand is both the advantage that en-
dows it with such versatility and the major cause of the diffi-
culty in executing a successful grasp. As a widely used five-
finger robotic dexterous hand, ShadowHand [1] amounts to
26 degrees of freedom (DoF), in contrast with 7 DoF for a
typical parallel gripper. Such high dimensionality magni-
fies the difficulty in both generating valid grasp poses and
planning the execution trajectories, and thus distinguishes
the dexterous grasping task from its counterpart for paral-
lel grippers. Several works have tackled the grasping pose
synthesis problem [6, 28, 33, 49], however, they all assume
oracle inputs (full object geometry and states). Very few
works [9,38] tackle dexterous grasping in a realistic robotic
setting, but so far no work yet can demonstrate universal
and diverse dexterous grasping that can well generalize to
unseen objects.
In this work, we tackle this very challenging task: learn-
ing universal dexterous grasping skills that can generalize
well across hundreds of seen and unseen object categories
in a realistic robotic setting and only allow us to access
depth observations and robot proprioception information.
Our dataset contains more than one million grasps for 5519
object instances from 133 object categories, which is the
largest robotic dexterous grasping benchmark to evaluate
universal dexterous grasping.
Inspired by the successful pipelines from parallel grip-
pers, we propose to decompose this challenging task into
two stages: 1) dexterous grasp proposal generation , in
which we predict diverse grasp poses given the point cloud
observations; and 2) goal-conditioned grasp execution , in
which we take one grasp goal pose predicted by stage 1 as a
condition and generates physically correct motion trajecto-
ries that comply with the goal pose. Note that both of these
two stages are indeed very challenging, for each of which
we contribute several innovations, as explained below.
For dexterous grasp proposal generation, we devise a
novel conditional grasp pose generative model that takes
point cloud observations and is trained on our synthesized
large-scale table-top dataset. Here our approach empha-
sizes the diversity in grasp pose generation, since the way
we humans manipulate objects can vary in many different
ways and thus correspond to different grasping poses. With-
out diversity, it is impossible for the grasping pose gen-
eration to comply with the demand of later dexterous ma-
nipulation. Previous works [22] leverages CV AE to jointly
model hand rotation, translation, and articulations and we
observe that such CV AE suffers from severe mode collapse
and can’t generate diverse grasp poses, owing to its lim-
ited expressivity when compared to conditional normaliz-
ing flows [12, 13, 15, 23, 35] and conditional diffusion mod-
els [5, 43, 46]. However, no works have developed normal-
izing flows and diffusion models that work for the grasp
pose space, which is a Cartesian product of SO(3) of hand
rotation and a Euclidean space of the translation and joint
angles. We thus propose to decompose this conditional
generative model into two conditional generative models: a
conditional rotation generative model, namely GraspIPDF,
leveraging ImplicitPDF [34] (in short, IPDF) and a con-
ditional normalizing flow, namely GraspGlow, leveraging
Glow [23]. Combining these two modules, we can sample
diverse grasping poses and even select what we need ac-
cording to language descriptions. The sampled grasps can
be further refined to be more physically plausible via Con-
tactNet, as done in [22].
For our grasp execution stage, we learn a goal-
conditioned policy that can grasp any object in the way
specified by the grasp goal pose and only takes realistic in-
puts: the point cloud observation and robot proprioception
information, as required by real robot experiments. Note
that reinforcement learning (RL) algorithms usually have
difficulties with learning such a highly generalizable pol-
icy, especially when the inputs are visual signals without
ground truth states. To tackle this challenge, we leverage
a teacher-student learning framework that first learns an or-
acle teacher model that can access the oracle state inputs
and then distill it to a student model that only takes realis-
tic inputs. Even though the teacher policy gains access to
oracle information, making it successful in grasping thou-
sands of different objects paired with diverse grasp goals is
still formidable for RL. We thus introduce two critical inno-
vations: a canonicalization step that ensures SO(2) equiv-
ariance to ease the policy learning; and an object curricu-
lum that first learns to grasp one object with different goals,
then one category, then many categories, and finally all cat-
egories.
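The state canonicalization can be pictured as rotating every observation about the gravity axis so the goal lies in a fixed heading, which is one way to obtain SO(2) equivariance by construction. The reference direction (+x) and the use of the goal translation to define the heading are illustrative assumptions; the paper's exact canonicalization may differ.

```python
import numpy as np

def canonicalize_about_z(points, goal_translation):
    """Rotate the scene point cloud and the goal hand translation about the
    vertical (z) axis so the goal heading becomes the +x direction.
    points: (N, 3) observed point cloud; goal_translation: (3,) goal hand position."""
    theta = np.arctan2(goal_translation[1], goal_translation[0])
    c, s = np.cos(-theta), np.sin(-theta)
    rot_z = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ rot_z.T, rot_z @ goal_translation
```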
Extensive experiments demonstrate the remarkable per-
formance of our pipelines. In the grasp proposal generation
stage, our pipeline is the only method that exhibits high di-
versity while maintaining the highest grasping quality. The
whole dexterous grasping pipeline, from the vision to pol-
icy, again achieves impressive performance in our simula-
tion environment and, for the first time, demonstrates a uni-
versal grasping policy with more than 60% success rate and
remarkably outperforms all the baselines. We will make the
dataset and the code publicly available to facilitate future
research.
|
Yang_ReCo_Region-Controlled_Text-to-Image_Generation_CVPR_2023 | Abstract
Recently, large-scale text-to-image (T2I) models have
shown impressive performance in generating high-fidelity
images, but with limited controllability, e.g., precisely spec-
ifying the content in a specific region with a free-form text
description. In this paper, we propose an effective technique
for such regional control in T2I generation. We augment T2I
models’ inputs with an extra set of position tokens, which
represent the quantized spatial coordinates. Each region
is specified by four position tokens to represent the top-left
and bottom-right corners, followed by an open-ended nat-
ural language regional description. Then, we fine-tune a
pre-trained T2I model with such new input interface. Our
model, dubbed as ReCo (Region-Controlled T2I), enables
the region control for arbitrary objects described by open-
ended regional texts rather than by object labels from a
constrained category set. Empirically, ReCo achieves bet-
ter image quality than the T2I model strengthened by positional words (FID: 8.82→7.36, SceneFID: 15.54→6.51
on COCO), together with objects being more accurately
placed, amounting to a 20.40% region classification accu-
racy improvement on COCO. Furthermore, we demonstrate
that ReCo can better control the object count, spatial re-
lationship, and region attributes such as color/size, with
the free-form regional description. Human evaluation on
PaintSkill shows that ReCo is +19.28% and +17.21% more
accurate in generating images with correct object count and
spatial relationship than the T2I model. Code is available
at https://github.com/microsoft/ReCo.
| 1. Introduction
Text-to-image (T2I) generation aims to generate faith-
ful images based on an input text query that describes the
image content. By scaling up the training data and model
size, large T2I models [31, 34, 36, 45] have recently shown
remarkable capabilities in generating high-fidelity images.
However, the text-only query allows limited controllability,
e.g., precisely specifying the content in a specific region.
The naive way of using position-related text words, such
as “top left” and “bottom right,” often results in ambigu-
ous and verbose input queries, as shown in Figure 2 (a).
Even worse, when the text query becomes long and com-
plicated, or describes an unusual scene, T2I models [31,45]
might overlook certain details and rather follow the visual
or linguistic training prior. These two factors together make
region control difficult. To get the desired image, users usu-
ally need to try a large number of paraphrased queries and
pick an image that best fits the desired scene. The process
known as “prompt engineering” is time-consuming and of-
ten fails to produce the desired image.
The desired region-controlled T2I generation is closely
related to the layout-to-image generation [7,9,22,23,34,38,
44, 49]. As shown in Figure 2 (b), layout-to-image mod-
els take all object bounding boxes with labels from a close
set of object vocabulary [24] as inputs. Despite showing
promise in region control, they can hardly understand free-
form text inputs, nor the region-level combination of open-
ended text descriptions and spatial positions. The two in-
put conditions of text and box provide complementary re-
ferring capabilities. Instead of separately modeling them as
in text-to-image and layout-to-image generations, we study
“region-controlled T2I generation” that seamlessly com-
bines these two input conditions. As shown in Figure 2 (c),
the new input interface allows users to provide open-ended
descriptions for arbitrary image regions, such as precisely
placing a “brown glazed chocolate donut” in a specific area.
To this end, we propose ReCo (Region-Controlled T2I)
that extends pre-trained T2I models to understand spatial
coordinate inputs. The core idea is to introduce an ex-
tra set of input position tokens to indicate the spatial po-
sitions. The image width/height is quantized uniformly into
N_bins bins. Then, any float-valued coordinate can be ap-
proximated and tokenized by the nearest bin. With an extra
embedding matrix (EP), the position token can be mapped
onto the same space as the text token. Instead of designing a
text-only query with positional words “in the top red donut”
as in Figure 2 (a), ReCo takes region-controlled text inputs
“<x1>, <y1>, <x2>, <y2> red donut,” where <x>, <y>
are the position tokens followed by the corresponding free-
form text description. We then fine-tune a pre-trained T2I
model with EP to generate the image from the extended
input query. To best preserve the pre-trained T2I capabil-
ity, ReCo training is designed to be similar to the T2I pre-
training, i.e., introducing minimal extra model parameters
(EP), jointly encoding position and text tokens with the text
encoder, and prefixing the image description before the ex-
tended regional descriptions in the input query.
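The coordinate quantization described above can be sketched as below; the token naming, the number of bins, and the rounding rule are illustrative assumptions rather than ReCo's exact vocabulary.

```python
def to_position_tokens(box_xyxy, image_w, image_h, num_bins=1000):
    """Quantize a region's top-left / bottom-right corners into discrete position tokens.

    box_xyxy: (x1, y1, x2, y2) in pixels."""
    def bin_id(value, size):
        return min(num_bins - 1, max(0, round(value / size * (num_bins - 1))))
    x1, y1, x2, y2 = box_xyxy
    ids = [bin_id(x1, image_w), bin_id(y1, image_h), bin_id(x2, image_w), bin_id(y2, image_h)]
    return [f"<bin_{i}>" for i in ids]

# Example: prefixing a free-form regional description with four position tokens
tokens = to_position_tokens((50, 40, 150, 190), 512, 512)
print(" ".join(tokens) + " a brown glazed chocolate donut")
```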
Figure 1 visualizes ReCo’s use cases and capabilities. As
Figure 2. (a) With positional words ( e.g., bottom/top/left/right and
large/small/tall/long), the T2I model (Stable Diffusion [34]) does
not manage to create objects with desired properties. (b)Layout-
to-image generation [22, 34, 38, 49] takes all object boxes and la-
bels as the input condition, but only works well with constrained
object labels. (c)Our ReCo model synergetically combines the
text and box referring, allowing users to specify an open-ended re-
gional text description precisely at any image region.
shown in Figure 1 (a), ReCo could reliably follow the input
spatial constraints and generate the most plausible images
by automatically adjusting object statues, such as the view
(front/side) and type (single-/double-deck) of the “bus.” Po-
sition tokens also allow the user to provide free-form re-
gional descriptions, such as “an orange cat wearing a red
hat” at a specific location. Furthermore, we empirically ob-
serve that position tokens are less likely to get overlooked or
misunderstood than text words. As shown in Figure 1 (b),
ReCo has better control over object count, spatial relation-
ship, and size properties, especially when the query is long
and complicated, or describes a scene that is less common
in real life. In contrast, T2I models [34] may struggle with
generating scenes with correct object counts (“ten”), rela-
tionships (“boat below traffic light”), relative sizes (“chair
larger than airplane”), and camera views (“zoomed out”).
To evaluate the region control, we design a comprehen-
sive experiment benchmark based on a pre-trained regional
object classifier and an object detector. The object classi-
fier is applied on the generated image regions, while the
detector is applied on the whole image. A higher accuracy
means a better alignment between the generated object lay-
out and the region positions in user queries. On the COCO
dataset [24], ReCo shows a better object classification ac-
curacy (42.02%→62.42%) and detector averaged preci-
sion (2.3→32.0), compared with the T2I model with care-
fully designed positional words. For image generation qual-
ity, ReCo improves the FID from 8.82 to 7.36, and Scene-
FID from 15.54 to 6.51. Furthermore, human evaluations
on PaintSkill [5] show +19.28% and +17.21% accuracy
gain in more correctly generating the query-described ob-
ject count and spatial relationship, indicating ReCo’s capa-
bility in helping T2I models to generate challenging scenes.
Our contributions are summarized as follows.
• We propose ReCo that extends pre-trained T2I mod-
els to understand coordinate inputs. Thanks to the in-
troduced position tokens in the region-controlled input
query, users can easily specify free-form regional de-
scriptions in arbitrary image regions.
• We instantiate ReCo based on Stable Diffusion. Ex-
tensive experiments show that ReCo strictly follows
the regional instructions from the input query, and also
generates higher-fidelity images.
• We design a comprehensive evaluation benchmark to
validate ReCo’s region-controlled T2I generation ca-
pability. ReCo significantly improves both the region
control accuracy and the image generation quality over
a wide range of datasets and designed prompts.
|
Yu_Turning_a_CLIP_Model_Into_a_Scene_Text_Detector_CVPR_2023 | Abstract
The recent large-scale Contrastive Language-Image Pre-
training (CLIP) model has shown great potential in various
downstream tasks via leveraging the pretrained vision and
language knowledge. Scene text, which contains rich textual
and visual information, has an inherent connection with a
model like CLIP . Recently, pretraining approaches based on
vision language models have made effective progresses in
the field of text detection. In contrast to these works, this
paper proposes a new method, termed TCM, focusing on
Turning the CLIP Model directly for text detection without
pretraining process. We demonstrate the advantages of the
proposed TCM as follows: (1) The underlying principle of
our framework can be applied to improve existing scene text
detector. (2) It facilitates the few-shot training capability of
existing methods, e.g., by using 10% of labeled data, we sig-
nificantly improve the performance of the baseline method
with an average of 22% in terms of the F-measure on 4
benchmarks. (3) By turning the CLIP model into existing
scene text detection methods, we further achieve promising
domain adaptation ability. The code will be publicly released
at https://github.com/wenwenyu/TCM.
| 1. Introduction
Scene text detection is a long-standing research topic
aiming to localize the bounding box or polygon of each
text instance from natural images, as it has wide practical
applications scenarios, such as office automation, instant
translation, automatic driving, and online education. With
the rapid development of fully-supervised deep learning
technologies, scene text detection has achieved remarkable
progresses. Although supervised approaches have made
remarkable progress in the field of text detection, they require
extensive and elaborate annotations, e.g., character-level,
word-level, and text-line level bounding boxes, especially
polygonal boxes for arbitrarily-shaped scene text. Therefore,
it is very important to investigate text detection methods
Figure 1. Comparisons of different paradigms of using text knowledge for scene text detection: (a) enhancing the backbone with cross-modal interactions, (b) enhancing the backbone with text, and (c) ours.
under a small amount of labeled data, i.e., few-shot training.
Recently, through leveraging the pretrained vision and
language knowledge, the large-scale Contrastive Language-
Image Pretraining (CLIP) model [26] has demonstrated its
significance in various downstream tasks. e.g., image classi-
fication [53], object detection [5], and semantic segmenta-
tion [12, 27, 43].
Compared to general object detection, scene text in natu-
ral images usually presents with both visual and rich char-
acter information, which has a natural connection with the
CLIP model. Therefore, how to make full use of cross-modal
information from visual, semantic, and text knowledge to
improve the performance of the text detection models re-
ceives increasing attentions in recent studies. For exam-
ples, Song et al. [29], inspired by CLIP, adopts fine-grained
cross-modality interaction to align unimodal embeddings
for learning better representations of backbone via carefully
designed pretraining tasks. Xue et al. [46] presents a weakly
supervised pretraining method to jointly learn and align vi-
sual and partial textual information for learning effective
visual text representations for scene text detection. Wan et
al.[35] proposes self-attention based text knowledge mining
to enhance backbone via an image-level text recognition
pretraining tasks.
Different from these works, as shown in Figure 1, this
paper focuses on turning the CLIP model for text detection
without pretraining process. However, it is not trivial to
incorporate the CLIP model into a scene text detector. The
key is seeking a proper method to exploit the visual and
semantic prior information conditioned on each image. In
this paper, we develop a new method for scene text detec-
tion, termed as TCM, short for Turning a CLIPModel into
a scene text detector, which can be easily plugged to im-
prove the scene text detection frameworks. We design a
cross-modal interaction mechanism through visual prompt
learning, which is implemented by cross-attention to recover
the locality feature from the image encoder of CLIP to cap-
ture fine-grained information to respond to the coarse text
region for the subsequent matching between text instance
and language. Besides, to steer the pretrained knowledge
from the text encoder conditioned independently on different
input images, we employ the predefined language prompt,
learnable prompt, and a language prompt generator using
simple linear layer to get global image information. In ad-
dition, we design an instance-language matching method to
align the image embedding and text embedding, which en-
courages the image encoder to explicitly refine text regions
from cross-modal visual-language priors. Compared to pre-
vious pretraining approaches, our method can be directly
finetuned for the text detection task without pretraining pro-
cess, as elaborated in Fig. 1. In this way, the text detector
can absorb the rich visual or semantic information of text
from CLIP. We summarize the advantages of our method as
follows:
•We construct a new text detection framework, termed
as TCM, which can be easily plugged to enhance the
existing detectors.
•Our framework can enable effective few-shot training
capability. Such advantage is more obvious when using
less training samples compared to the baseline detectors.
Specifically, by using 10% of labeled data, we improve
the performance of the baseline detector by an average
of 22% in terms of the F-measure on 4 benchmarks.
•TCM introduces promising domain adaptation ability,
i.e., when using training data that is out-of-distribution
of the testing data, the performance can be significantly
improved. Such phenomenon is further demonstrated
by a NightTime-ArT text dataset1, which we collected
from the ArT dataset.
1NightTime-ArT Download Link•Without pretraining process using specific pretext tasks,
TCM can still leverage the prior knowledge from the
CLIP model, outperforming previous scene text pre-
training methods [29, 35, 46].
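The instance-language matching mentioned earlier can be pictured as scoring every spatial location of the CLIP image feature map against the text-prompt embedding, yielding a coarse text-region map; the cosine-similarity form, sigmoid, and temperature below are illustrative assumptions about that alignment rather than TCM's exact module.

```python
import torch
import torch.nn.functional as F

def instance_language_score_map(pixel_feats, text_embedding, temperature=0.07):
    """Score each location of a dense image feature map against a text embedding.
    pixel_feats: (dim, H, W) locality-recovered CLIP image features.
    text_embedding: (dim,) embedding of the (predefined/learnable) text prompt."""
    feats = F.normalize(pixel_feats.flatten(1), dim=0)        # (dim, H*W)
    text = F.normalize(text_embedding, dim=0)                 # (dim,)
    sims = (text @ feats) / temperature                       # (H*W,) similarities
    return sims.sigmoid().reshape(pixel_feats.shape[1:])      # (H, W) text-region map
```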
|
Xu_Probabilistic_Knowledge_Distillation_of_Face_Ensembles_CVPR_2023 | Abstract
Mean ensemble (i.e. averaging predictions from multiple
models) is a commonly-used technique in machine learning
that improves the performance of each individual model. We
formalize it as feature alignment for ensemble in open-set
face recognition and generalize it into Bayesian Ensemble
Averaging (BEA) through the lens of probabilistic modeling.
This generalization brings up two practical benefits that ex-
isting methods could not provide: (1) the uncertainty of a
face image can be evaluated and further decomposed into
aleatoric uncertainty and epistemic uncertainty, the latter
of which can be used as a measure for out-of-distribution
detection of faceness; (2) a BEA statistic provably reflects
the aleatoric uncertainty of a face image, acting as a mea-
sure for face image quality to improve recognition perfor-
mance. To inherit the uncertainty estimation capability from
BEA without the loss of inference efficiency, we propose
BEA-KD, a student model to distill knowledge from BEA.
BEA-KD mimics the overall behavior of ensemble members
and consistently outperforms SOTA knowledge distillation
methods on various challenging benchmarks.
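For intuition, a plain mean ensemble and one natural way to aggregate vMF-style ensemble members are sketched below. The natural-parameter summation is a standard way to combine vMF evidence and is only an assumption here; the paper's exact BEA statistic and its r-vMF parameterization may differ.

```python
import numpy as np

def mean_ensemble(embeddings):
    """Mean ensemble: average the members' L2-normalized face embeddings and re-normalize.
    embeddings: (n_members, dim)."""
    mu = embeddings.mean(axis=0)
    return mu / np.linalg.norm(mu)

def bayesian_ensemble_averaging(mus, kappas):
    """Aggregate n members r-vMF(mu_i, kappa_i) into a single (mu_tilde, kappa_bar)
    by summing the natural parameters kappa_i * mu_i (illustrative choice).
    mus: (n_members, dim) unit mean directions; kappas: (n_members,) concentrations."""
    eta = (kappas[:, None] * mus).sum(axis=0)   # summed natural parameters
    kappa_bar = np.linalg.norm(eta)             # combined concentration (confidence)
    mu_tilde = eta / kappa_bar                  # combined mean direction
    return mu_tilde, kappa_bar
```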
| 1. Introduction
Knowledge Distillation (KD) is an active research area
that has profound benefits for model compression, wherein
competitive recognition performance can be achieved by
smaller models (student models) via a distillation process
from teacher models. As such, smaller models can be de-
ployed into space-constrained environments such as mobile
and embedded devices.
There has been abundant literature in KD for face recog-
nition [22,33,34]. However, all the existing approaches fall
into the “one-teacher-versus-one-student” paradigm. This
learning paradigm has several limitations. Firstly, a single
teacher can be biased, which further results in biased esti-
Figure 1. A conceptual illustration of BEA and BEA-KD. Given
a face image x, we have n = 5 probabilistic ensemble members
{r-vMF(µ_i, κ_i)}_{i=1}^{n} (marked by light blue). Bayesian ensemble
averaging (marked by dark blue) returns a single r-vMF(µ̃_x, κ̄_x^{(n)})
that accounts for the expected positions and confidence by all the
ensemble members. To emulate the ensemble's probabilistic be-
havior, we employ a parametrized distribution q_{ϕ,φ}(z|x) to ap-
proximate BEA.
mates of face feature embeddings given by a student after
knowledge distillation from the biased teacher. Secondly, it
only yields point estimates of face feature embeddings, un-
able to provide uncertainty measure for face recognition in
a safety-sensitive scenario.
Compared with single-teacher KD, KD from multiple
teachers (a.k.a. ensemble KD) is beneficial and has been
extensively explored in literature [9, 11, 29, 32, 36, 37].
However, these approaches are designed solely for closed-
set classification tasks, distilling logits in a fixed simplex
space via KL divergence (as the label set remains the same
throughout training and test). In contrast, face recognition
is inherently an open-set problem where classes cannot be
known a priori. More specifically, face identities appear-
ing during the inference stage scarcely overlap with those
in the training phase. Consequently, without a fixed sim-
plex space for logit distillation, existing approaches cannot
be readily applied to face recognition. As will be shown
in our empirical studies, existing closed-set KD approaches
exhibit inferior performance in face recognition tasks with
million-scale label sets.
How we treat teachers highly affects KD performance.
Unlike prior art [20, 26] that takes average of predictions
(termed ‘mean ensemble’ throughout this paper), we treat
teachers as draws in a probabilistic manner. We find that
this treatment leads to a generalization of mean ensemble,
namely Bayesian Ensemble Averaging (BEA), which fur-
ther brings up two practical benefits that existing methods
could not provide: (1) the uncert |
Yang_MIANet_Aggregating_Unbiased_Instance_and_General_Information_for_Few-Shot_Semantic_CVPR_2023 | Abstract
Existing few-shot segmentation methods are based on
the meta-learning strategy and extract instance knowledge
from a support set and then apply the knowledge to seg-
ment target objects in a query set. However, the extracted
knowledge is insufficient to cope with the variable intra-
class differences since the knowledge is obtained from a
few samples in the support set. To address the problem,
we propose a multi-information aggregation network (MI-
ANet) that effectively leverages the general knowledge, i.e.,
semantic word embeddings, and instance information for
accurate segmentation. Specifically, in MIANet, a general
information module (GIM) is proposed to extract a general
class prototype from word embeddings as a supplement to
instance information. To this end, we design a triplet loss
that treats the general class prototype as an anchor and
samples positive-negative pairs from local features in the
support set. The calculated triplet loss can transfer seman-
tic similarities among language identities from a word em-
bedding space to a visual representation space. To alle-
viate the model biasing towards the seen training classes
and to obtain multi-scale information, we then introduce a
non-parametric hierarchical prior module (HPM) to gener-
ate unbiased instance-level information via calculating the
pixel-level similarity between the support and query image
features. Finally, an information fusion module (IFM) com-
bines the general and instance information to make pre-
dictions for the query image. Extensive experiments on
PASCAL-5i and COCO-20i show that MIANet yields supe-
rior performance and sets a new state-of-the-art. Code is
available at github.com/Aldrich2y/MIANet.
| 1. Introduction
The challenge of few-shot semantic segmentation (FSS)
is how to effectively use one or five labeled samples to seg-
ment a novel class. Existing few-shot segmentation meth-
Figure 1. Comparison between (a) existing FSS methods and (b)
proposed MIANet. (a) Existing methods extract instance-level
knowledge from the support images, which is not able to cope with
large intra-class variation. (b) our MIANet extracts instance-level
knowledge from the support images and obtains general class in-
formation from word embeddings. These two types of information
benefit the final segmentation.
ods [28, 30, 33, 37] adopt the metric-based meta-learning
strategy [26, 29]. The strategy is typically composed of
two stages: meta-training and meta-testing. In the meta-
training stage, models are trained by plenty of independent
few-shot segmentation tasks. In meta-testing, models can
thus quickly adapt and extrapolate to new few-shot tasks of
unseen classes and segment the novel categories since each
training task involves a different seen class.
As shown in Figure 2, natural images of the same category
have semantic differences and perspective distortion, which
leads to intra-class differences. Current FSS approaches
segment a query image by matching the guidance informa-
tion from the support set with the query features (Figure 1
(a)). Unfortunately, the correlation between the support im-
age and the query image is not enough to support the match-
Figure 2. We define two types of intra-class variation. (a) The
object in each column has the same semantic label but belongs to
different fine-grained categories. (b) The object belonging to the
same category differs greatly in appearance due to the existence of
perspective distortion.
ing strategy in some support-query pairs due to the diversity
of intra-class differences, which affects the generalization
performance of the models. On the other hand, modules
with numerous learnable parameters are devised by FSS
methods to better use the limited instance information. And
lots of few-shot segmentation tasks of seen classes are used
to train the models in the meta-training stage. Although cur-
rent methods freeze the backbone, the rest parameters will
inevitably fit the feature distribution of the training data and
make the trained models misclassify the seen training class
to the unseen testing class.
To address the above issues, a multi-information ag-
gregation network is proposed for accurate segmentation.
Specifically, we first design a general information module
(GIM) to produce a general class prototype by leveraging
class-based word embeddings. This prototype represents
general information for the class, which is beyond the sup-
port information and can supplement some missing class
information due to intra-class differences. As shown in Fig-
ure 1 (b), the semantic word vectors for each class can be
obtained by a pre-trained language model, i.e., word2vec .
Then, GIM takes the word vector and a support prototype
as input to get the general prototype. Next, a well-designed
triplet loss [25] is applied to achieve the alignment be-
tween the semantic prototype and the visual features. The
triplet loss extracts positive-negative pairs from local fea-
tures which distinguishes our method from other improved
triplets [3,4,11]. The semantic similarity between the word
embeddings in a word embedding space can therefore be
transferred to a visual embedding space. Finally, the pro-
jected prototype is supplemented into the main branch as
the general information of the category for information fu-
sion to alleviate the intra-class variance problem.
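A hedged sketch of the triplet objective described above, with the general (word-embedding-derived) prototype as the anchor. Taking positives from masked foreground support features and negatives from background features is an illustrative assumption about the positive-negative sampling, and the cosine/margin form is one common choice.

```python
import torch
import torch.nn.functional as F

def prototype_triplet_loss(general_prototype, support_feats, support_mask, margin=0.5):
    """Pull the general class prototype toward foreground support features and away
    from background ones.
    general_prototype: (dim,) projected word-embedding prototype.
    support_feats: (dim, H, W) support image features; support_mask: (H, W) in {0, 1}."""
    feats = F.normalize(support_feats.flatten(1), dim=0)   # (dim, H*W), unit columns
    anchor = F.normalize(general_prototype, dim=0)         # (dim,)
    sims = anchor @ feats                                  # cosine similarity per location
    fg = support_mask.flatten().bool()
    pos = sims[fg].mean()                                  # anchor-positive similarity
    neg = sims[~fg].mean()                                 # anchor-negative similarity
    return F.relu(neg - pos + margin)
```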
Moreover, to capture the instance-level details and alleviate the model biasing towards the seen classes, we propose
a non-parametric hierarchical prior module (HPM). HPM
works in two aspects. (1) HPM is class-agnostic since it
does not require training. (2) HPM can generate hierarchi-
cal activation maps for the query image by digging out the
relationship between high-level features for accurate seg-
mentation of unseen classes. In addition, we build informa-
tion channels between different scales to preserve discrim-
inative information in query features. Finally, the unbiased
instance-level information and the general information are
aggregated by an information fusion module (IFM) to seg-
ment the query image. Our main contributions are summa-
rized as follows:
(1) We propose a multi-information aggregation network
(MIANet) to aggregate general information and unbi-
ased instance-level information for accurate segmenta-
tion.
(2) To the best of our knowledge, this is the first time to
use word embeddings in FSS, and we design a general
information module (GIM) to obtain the general class
information from word embeddings for each class.
The module is optimized through a well-designed
triplet loss and can provide general class information
to alleviate intra-class differences.
(3) A non-parametric hierarchical prior module (HPM) is
proposed to supply MIANet with unbiased instance-
level segmentation knowledge, which provides the
prior information of the query image on multi-scales
and alleviates the bias problem in testing.
(4) Our MIANet achieves state-of-the-art results on two
few-shot segmentation benchmarks, i.e., PASCAL-5i
and COCO-20i. Extensive experiments validate the ef-
fectiveness of each component in our MIANet.
|
Yan_Towards_Trustable_Skin_Cancer_Diagnosis_via_Rewriting_Models_Decision_CVPR_2023 | Abstract
Deep neural networks have demonstrated promising per-
formance on image recognition tasks. However, they may
heavily rely on confounding factors, using irrelevant arti-
facts or bias within the dataset as the cue to improve perfor-
mance. When a model performs decision-making based on
these spurious correlations, it can become untrustable and
lead to catastrophic outcomes when deployed in the real-
world scene. In this paper, we explore and try to solve this
problem in the context of skin cancer diagnosis. We intro-
duce a human-in-the-loop framework in the model training
process such that users can observe and correct the model’s
decision logic when confounding behaviors happen. Specif-
ically, our method can automatically discover confounding
factors by analyzing the co-occurrence behavior of the sam-
ples. It is capable of learning confounding concepts using
easily obtained concept exemplars. By mapping the black-
box model’s feature representation onto an explainable con-
cept space, human users can interpret the concept and in-
tervene via first order-logic instruction. We systematically
evaluate our method on our newly crafted, well-controlled
skin lesion dataset and several public skin lesion datasets.
Experiments show that our method can effectively detect
and remove confounding factors from datasets without any
prior knowledge about the category distribution and does
not require fully annotated concept labels. We also show
that our method enables the model to focus on clinical-
related concepts, improving the model’s performance and
trustworthiness during model inference.
| 1. Introduction
Deep neural networks have achieved excellent perfor-
mance on many visual recognition tasks [11,14,35]. Mean-
while, there are growing concerns about the trustworthiness
of the model’s black-box decision-making process. In med-
ical imaging applications, one of the major issues is deep
learning models’ confounding behaviors using irrelevant ar-
tifacts ( i.e. rulers, dark corners) or bias ( i.e. image back-
Figure 1. Our method allows people to correct the model’s con-
founding behavior within the skin lesion training set via rewriting
the model’s logic.
grounds, skin tone) as the cue to make the final predictions.
These spurious correlations in the training distribution can
make models fragile when novel testing samples are pre-
sented. Therefore, transparency of decision-making and
human-guided bias correction will significantly increase the
reliability and trustworthiness of model deployments in a
life-critical application scenario like cancer diagnosis.
For instance, due to the nature of dermatoscopic images,
the skin cancer diagnosis often involves confounding fac-
tors [4, 22, 24, 40], i.e., dark corners, rulers, and air pock-
ets. Bissoto et al . [40] shows that deep learning models
trained on common skin lesion datasets can be biased. Sur-
prisingly, they found the model can reach an AUC of 73%,
even though lesions in images are totally occluded. To bet-
ter understand the issue of confounding behaviors, we il-
lustrate a motivating example. Fig. 1 shows a represen-
tative confounding factor “dark corners” in the ISIC2019-
2020 [36, 38] dataset, where the presence of dark corners
is much higher in the melanoma images than in benign im-
ages. A model trained on this dataset tends to predict a le-
sion as melanoma when dark corners appear. This model is
undoubtedly untrustable and is catastrophic for deployment
in real clinical practice. To deal with this nuisance and im-
prove the model’s trustworthiness, an ideal method would
be: i ) the model itself is explainable so that humans can un-
derstand the prediction logic behind it, and ii) the model has
an intervening mechanism that allows humans to correct the
model once confounding behavior happens. Through this
way, the model can avoid dataset biasing issues and rely on
expected clinical rules when needed to diagnose.
In this work, our overall goal is to improve model trans-
parency as well as to develop a mechanism through which human
users can intervene in the model training process when confound-
ing factors are observed. The major differences between our
proposed solution and existing works [19,28,29,33] are: (1)
Formodel visualization , we focus on generating human-
understandable textual concepts rather than pixel-wise at-
tribution explanations like Class Activation Maps [42]. (2)
Forconcept learning , we do not hold any prior knowl-
edge about the confounding concepts within the dataset.
This makes our problem set-up more realistic and practi-
cal than the existing concept-based explanation or inter-
action [19, 33] framework where fully-supervised concept
labels are required. (3) For method generalization , our
method can be applied on top of any deep models.
Therefore, we propose a method that is capable of
learning confounding concepts with easily obtained con-
cept exemplars. This is realized via clustering the model’s
co-occurrence behavior based on spectral relevance anal-
ysis [20] and concept learning based on concept activation
vectors (CAVs) [18]. Also, a concept space learned by CAVs
is more explainable than its feature-based counterparts.
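To make the CAV idea concrete, the sketch below estimates a concept direction from concept exemplars versus random counterexamples and turns backbone features into concept scores. The feature arrays, classifier choice, and function names are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the authors' code): estimating a Concept
# Activation Vector (CAV) in the spirit of [18]. `concept_feats` and
# `random_feats` are assumed (N, D) arrays of backbone features for
# concept exemplars (e.g., images with rulers) and random counterexamples.
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_feats, random_feats):
    X = np.concatenate([concept_feats, random_feats], axis=0)
    y = np.concatenate([np.ones(len(concept_feats)),
                        np.zeros(len(random_feats))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_.ravel()              # normal of the separating hyperplane
    return cav / np.linalg.norm(cav)

def concept_score(features, cav):
    # Projecting features onto the concept direction yields the
    # explainable concept space on which humans can intervene.
    return features @ cav
```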
To this end, we propose a human-in-the-loop framework
that can effectively discover and remove the confounding
behaviors of the classifier by transforming its feature rep-
resentation into explainable concept scores. Then, humans
are allowed to intervene based on first-order logic. Through
human interaction (see Fig. 1) on the model’s concept space,
people can directly provide feedback to the model training
(feedback turned into gradients) and remove unwanted bias
and confounding behaviors. Moreover, we notice that no
suitable dermatology datasets are available for confounding
behavior analysis. To increase the methods’ reproducibil-
ity and data accessibility in this field, we curated a well-
controlled dataset called ConfDerm containing 3576 im-
ages based on the well-known ISIC2019 and ISIC2020 datasets.
Within the training set, all images in one of the classes are
confounded by one of five confounding factors, including
dark corners, borders, rulers, air pockets, and hairs. Still, in
the testing set, all images are random. Some confounding
factors within the dataset are illustrated in Fig. 2.
We summarize our main contributions as: (1) We
crafted a novel dataset called ConfDerm, which is the first
skin lesion dataset used for systematically evaluating the
model’s trustworthiness under different confounding behav-
iors within the training set. (2) Our new spectral rele-
vance analysis algorithm on the popular skin cancer dataset
ISIC2019-2020 has revealed insights that artifacts such as
dark corners, rulers, and hairs can significantly confound
modern deep neural networks. (3) We developed a human-
in-the-loop framework using concept mapping and logic
layer rewriting to make the model self-explainable and
enable users to effectively remove the model’s confounding
behaviors. (4) Experimental results on different datasets
demonstrate the superiority of our method on performance
improvement, artifact removal, and skin tone debiasing.
Figure 2. Observed confounding concepts in the ISIC2019-2020
datasets; the top row shows sample images, and the bottom row
shows the corresponding GradCAM heatmaps: (a) dark corners,
(b) rulers, (c) dark borders, (d) dense hairs, (e) air pockets.
|
Zhang_Ingredient-Oriented_Multi-Degradation_Learning_for_Image_Restoration_CVPR_2023 | Abstract
Learning to leverage the relationship among diverse im-
age restoration tasks is quite beneficial for unraveling the
intrinsic ingredients behind the degradation. Recent years
have witnessed the flourish of various All-in-one methods,
which handle multiple image degradations within a single
model. In practice, however, few attempts have been made
to excavate task correlations by exploring the underly-
ing fundamental ingredients of various image degradations,
resulting in poor scalability as more tasks are involved. In
this paper, we propose a novel perspective to delve into the
degradation via an ingredients-oriented rather than pre-
vious task-oriented manner for scalable learning. Specif-
ically, our method, named Ingredients-oriented Degrada-
tion Reformulation framework (IDR), consists of two stages,
namely task-oriented knowledge collection and ingredients-
oriented knowledge integration. In the first stage, we con-
duct ad hoc operations on different degradations according
to the underlying physics principles, and establish the cor-
responding prior hubs for each type of degradation. While
the second stage progressively reformulates the preceding
task-oriented hubs into single ingredients-oriented hub via
learnable Principal Component Analysis (PCA), and em-
ploys a dynamic routing mechanism for probabilistic un-
known degradation removal. Extensive experiments on var-
ious image restoration tasks demonstrate the effectiveness
and scalability of our method. More importantly, our IDR
exhibits the favorable generalization ability to unknown
downstream tasks.
| 1. Introduction
Image restoration aims to recover the high-quality im-
ages from their degraded observations, which is a general
term of a series of low-level vision tasks. In addition to
achieving satisfactory visual effects in photography, image
restoration is also widely used in many other real world
scenarios, such as autopilot and surveillance. Complex
*Both authors contributed equally to this research.
†Corresponding author.
Figure 1. An illustration of our proposed ingredients-oriented
degradation reformulation principle. Instead of the previous task-
oriented paradigm where each task is learned exclusively, we
adopt an ingredients-oriented paradigm to explore the correla-
tion among diverse restoration tasks for scalable degradation-
reformulated learning, where Conv., Add., and Mul. denote
convolution, addition, and multiplication.
environments put forward higher requirements for image
restoration algorithms, when considering the variability and
unknowability of the corruption types. However, most exist-
ing methods are dedicated to single degradation re-
moval, such as denoising [15,24,61], deraining [20,52,55],
deblurring [8, 40, 42], dehazing [26, 44, 45], low-light en-
hancement [14, 34, 50], etc., and thus do not satisfy the
requirements of real-world applications.
Recently, all-in-one fashion methods have been com-
ing to the fore, which handle multiple image degradations
within a single model. These methods can be roughly
categorized into two families, i.e., corruption-specific and
corruption-agnostic. Representative studies of the former
[2, 28] deal with different degradations via separate sub-
networks, which demands pre-specification of corruption
types, limiting the scope of further application. While the
efforts in latter [25, 47] release the model from the prior
of the corruption types, improving the flexibility in prac-
tice. However, both of them suffer from poor scalability as
more tasks are involved, indicating that the diverse degrada-
tions are learned exclusively under the potential capability
bottleneck, without touching the intrinsic correlation among
them, which we refer to as the task-oriented paradigm.
To solve the above problem, we ask two questions: i)
’whether there are commonalities between different degra-
dations?’ During the past decades, few works have been
devoted to this field; [10] presented the interrelationship
between image dehazing and low-light image enhancement.
Going a step further, we envision that such association are
widespread in various degradations, such as directionality
in deblurring and deraining, unnatural image layering in
deraining and denoising. Therefore, it is of great interest
to consider the correlation among various restoration tasks
for learning the intrinsic ingredients behind the degrada-
tion, which we referred as ingredient-oriented paradigm. ii)
’Whether an corrupted image definitely ascribed to only one
type of degradation?’ In real world scenarios, it is hard
to determine as multiple degradations may occur simul-
taneously, such as heavy rain typically accumulated with
mist, or low-light combined with blur in night-time surveil-
lance [63]. Therefore, it is inappropriate to learn each
restoration task exclusively.
In this paper, we propose Ingredients-oriented Degrada-
tion Reformulation framework (IDR) for all-in-one image
restoration, which provides a novel perspective via delving
into the degradation and concentrating on the underlying
fundamental ingredients. Specifically, the learning proce-
dure of IDR consists of two stages, namely task-oriented
knowledge collection and ingredients-oriented knowledge
integration. We perform the above reformulation in the
meta prior learning module (MPL) with the collaboration
of both degradation representation and degradation opera-
tion, while the backbone network can be any transformer-
based architecture. In the first stage, we conduct ad hoc
operations for different degradations depending on the un-
derlying physics principles, which pre-embeds the pri-
ors of disparate physics characteristics respectively. Mean-
while, separate task-oriented prior hubs are established for
each type of degradation, responsible for excavating the
specific degradation ingredients for compositional represen-
tation. While the second stage progressively reformulates
the preceding task-oriented hubs into a single ingredients-
oriented hub via learnable Principal Component Analysis
(PCA), striving for commonalities among multiple degrada-
tions in terms of the ingredient-level, while preserving re-
spective variance information as much as possible. Besides,
a dynamic soft routing mechanism is employed in MPL for
probabilistic unknown1degradation removal, according to
the operation priors embedded in the first stage.
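For intuition only, the sketch below compresses several task-oriented prior hubs into one shared ingredient basis with plain PCA; the hub sizes are hypothetical, and IDR's actual reformulation uses a learnable PCA inside its MPL module, so this is a simplified stand-in rather than the paper's method.

```python
# Illustrative sketch only: merging task-oriented prior hubs into one
# ingredients-oriented hub via plain (non-learnable) PCA.
import torch

def merge_hubs(task_hubs, num_ingredients=64):
    # task_hubs: list of (K_t, D) tensors, one prior hub per degradation type
    all_priors = torch.cat(task_hubs, dim=0)                 # (sum K_t, D)
    U, S, V = torch.pca_lowrank(all_priors, q=num_ingredients)
    centered = all_priors - all_priors.mean(dim=0, keepdim=True)
    ingredient_hub = centered @ V                            # shared ingredient coordinates
    return ingredient_hub, V                                 # V: shared ingredient basis

# Example with five hypothetical degradation-specific hubs of 32 priors each
hubs = [torch.randn(32, 256) for _ in range(5)]
hub, basis = merge_hubs(hubs)
```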
The contributions of this work are summarized as below:
• We rethink the current paradigm of all-in-one fashion
methods, and propose to delve into the degradation for
intrinsic ingredients excavation, in that improving the
scalability of the model.
1Namely, the degradation types are not available in the second stage.
• We propose the Ingredients-oriented Degradation Re-
formulation framework (IDR) for image restoration,
which consists of two stages, i.e., task-oriented knowl-
edge collection and ingredients-oriented knowledge
integration, collaborating on both degradation repre-
sentation and degradation operation.
• Extensive experiments are conducted to verify the ef-
fectiveness of our method. As far as we know, IDR is
the first work to perform up to five image restoration
tasks in an all-in-one fashion.
|
Yang_POEM_Reconstructing_Hand_in_a_Point_Embedded_Multi-View_Stereo_CVPR_2023 | Abstract
Enabling neural networks to capture 3D geometrical-
aware features is essential in multi-view based vision tasks.
Previous methods usually encode the 3D information of
multi-view stereo into the 2D features. In contrast, we
present a novel method, named POEM, that directly oper-
ates on the 3D POints Embedded in the Multi-view stereo
for reconstructing hand mesh in it. Point is a natural form
of 3D information and an ideal medium for fusing fea-
tures across views, as it has different projections on dif-
ferent views. Our method is thus in light of a simple yet
effective idea, that a complex 3D hand mesh can be rep-
resented by a set of 3D points that 1) are embedded in
the multi-view stereo, 2) carry features from the multi-view
images, and 3) encircle the hand. To leverage the power
of points, we design two operations: point-based feature
fusion and cross-set point attention mechanism. Evalua-
tion on three challenging multi-view datasets shows that
POEM outperforms the state-of-the-art in hand mesh re-
construction. Code and models are available for research
atgithub.com/lixiny/POEM
| 1. Introduction
Hand mesh reconstruction plays a central role in the field
of augmented and mixed reality, as it can not only deliver
realistic experiences for the users in gaming but also sup-
port applications involving teleoperation, communication,
education, and fitness outside of gaming. Many significant
efforts have been made for the monocular 3D hand mesh
reconstruction [1, 5, 7, 9, 31, 32]. However, it still strug-
gles to produce applicable results, mainly for these three
reasons. (1)Depth ambiguity. Recovery of the absolute
position in a monocular camera system is an ill-posed prob-
lem. Hence, previous methods [9, 31, 54] only recovered
the hand vertices relative to the wrist ( i.e. root-relative).
(2)Unknown perspectives. The shape of the hand’s 2D
†Cewu Lu is the corresponding author, the member of Qing Yuan Re-
search Institute and MoE Key Lab of Artificial Intelligence, AI Institute,
Shanghai Jiao Tong University, China and Shanghai Qi Zhi institute.
Figure 1. Intersection area of N cameras’ frustum spaces. The
gray dots represent the point cloud P aggregated from the N frustums.
Our method, POEM, standing for point embedded multi-view
stereo, focuses on the dark area scattered with gray dots.
projection is highly dependent on the camera’s perspec-
tive model ( i.e. camera intrinsic matrix). However, the
monocular-based methods usually suggest a weak perspec-
tive projection [1, 27], which is not accurate enough to re-
cover the hand’s 3D structure. (3)Occlusion. The occlu-
sion between the hand and its interacting objects also chal-
lenges the accuracy of the reconstruction [32]. These issues
limit monocular-based methods from practical application,
in which the absolute and accurate position of the hand sur-
face is required for interacting with our surroundings.
Our paper is thus focusing on reconstructing hands from
multi-view images. Motivation comes from two aspects.
First, the issues mentioned above can be alleviated by lever-
aging the geometrical consistency among multi-view im-
ages. Second, the prospered multi-view hand-object track-
ing setups [2, 4, 49, 55] and VR headsets bring us an urgent
demand and direct application of multi-view hand recon-
struction in real-time. A common practice of multi-view 3D
pose estimation follows a two-stage design. It first estimates
the 2D key points of the skeleton in each view and then
back-project them to 3D space through several 2D-to-3D
lifting methods, e.g. algebraic triangulation [17,18,39], Pic-
torial Structures Model (PSM) [33, 38], 3D CNN [18, 43],
plane sweep [26], etc. However, these two-stage methods
are not capable of reconstructing an animatable hand mesh
that contains both skeleton and surface. It was not until re-
cently that a one-stage multi-view mesh regression model
was proposed [45].
How to effectively fuse the features from different im-
ages is a key component in the multi-view setting. Ac-
cordingly, previous methods can be categorized into three
types. (1) Fusing in 2D. The features are directly fused
in the 2D domain using explicit epipolar transform [17, 38]
or implicit representations that encode the camera transfor-
mation ( i.e. camera intrinsic and extrinsic matrix) into 2D
features, e.g. feature transform layer (FTL) [14, 39] and 3D
position embedding (RayConv) [45]; (2) Fusing in 3D. The
features are fused in a 3D voxel space via PSM [33, 38] or
3D CNNs [18, 43]; (3) Fusing via 3D-2D projection. The
features are fused by first projecting the 3D keypoints’ ini-
tial guess into each 2D plane and then fusing multi-view
features near those 2D locations [45];
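As a reference for the 3D-to-2D step that fusion types (1) and (3) above rely on, the sketch below projects 3D points into one view with a pinhole model and samples per-point features there. The tensor shapes, the bilinear sampling, and the lack of a behind-camera check are illustrative simplifications, not POEM's implementation.

```python
# Minimal sketch: project 3D points into one camera view and sample
# image features at the projected locations.
# K: (3, 3) intrinsics, R, t: extrinsics, pts: (N, 3) world-space points,
# feat_map: (C, H, W) feature map of that view -- all hypothetical inputs.
import torch
import torch.nn.functional as F

def project_and_sample(pts, K, R, t, feat_map):
    cam = pts @ R.T + t                            # world -> camera coordinates
    uv = cam @ K.T                                 # camera -> homogeneous pixels
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)    # divide by depth
    C, H, W = feat_map.shape
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,   # normalize to [-1, 1]
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    sampled = F.grid_sample(feat_map[None], grid[None, None],
                            align_corners=True)       # (1, C, 1, N)
    return sampled[0, :, 0].T                          # (N, C) per-point features
```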
The fusion mode in type 1 is considered as holistic ,
since it indiscriminately fuses all the features from differ-
ent views. Consequentially, it ignores the structure of the
underlying hand model that we are interested in. On the
contrary, the fusion mode in type 3 is considered as lo-
cal. However, only the features around the 2D keypoints
are hard to capture the consistent geometrical features from
a global view. Besides, the 3D keypoints initial guess may
not be accurate enough, resulting in the fusion being unsta-
ble. The fusion mode in type 2 is not in our consideration
as it tends to be computationally expensive and suffers from
quantization error.
Based on the above discussion, we aim to seek a fea-
ture representation and a fusion mode between type 1 and
type 3 for both holistically and locally fusing the features
in multi-views, and to explore a framework for robust and
accurate hand mesh reconstruction. Our method is called
POEM, standing for POintEmbedded Multi-view Stereo .
We draw inspiration from the Basis Point Set (BPS) [34],
which is based on a simple yet effective idea that a complex
3D shape can be represented by a fixed set of points (BPS)
that wraps the shape in it. If we consider the intersection of
different cameras’ frustum spaces as a point cloud, and the
hand’s vertices as another point cloud, then the intersected
space is the basis point set for hand vertices (see Fig. 1).
Once we assign the multi-view image features to the point
cloud in the intersected space, fusing image features across
different views becomes fusing the point features from dif-
ferent camera frustums. The advantages of this representa-
tion are two-fold: (i)The hand is wrapped in a dense point
cloud (set) that carries dense image features collected from
different views, which is more holistic and robust than the
local fusion mode in type 3. (ii)For each vertex on the hand
surface, it interacts with basis points in its local neighborhood ( i.e., the k nearest neighbors), which is more selective than
the holistic fusion mode in type 1.
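A minimal sketch of this vertex-to-basis-point interaction is given below: each hand-vertex query attends to its k nearest feature-carrying basis points. The single-head dot-product attention and the absence of positional encodings are simplifying assumptions rather than POEM's exact cross-set point attention.

```python
# Illustrative sketch: each hand-vertex query attends to the features of
# its k nearest basis points (the feature-carrying point cloud in the
# frustum intersection). Shapes and single-head attention are assumptions.
import torch

def cross_set_knn_attention(vert_feat, vert_xyz, basis_feat, basis_xyz, k=16):
    # vert_feat: (V, D), vert_xyz: (V, 3), basis_feat: (P, D), basis_xyz: (P, 3)
    dist = torch.cdist(vert_xyz, basis_xyz)              # (V, P) pairwise distances
    knn_idx = dist.topk(k, largest=False).indices        # (V, k) nearest basis points
    neigh = basis_feat[knn_idx]                          # (V, k, D) gathered features
    attn = torch.softmax((vert_feat.unsqueeze(1) * neigh).sum(-1)
                         / vert_feat.shape[-1] ** 0.5, dim=1)   # (V, k)
    return vert_feat + (attn.unsqueeze(-1) * neigh).sum(1)      # residual update
```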
Fig. 2 shows our model’s architecture. POEM consists
of two stages. In the first stage (Sec. 3.2), POEM takes the
multi-view images as input and predicts the 2D keypoints
of the hand skeleton in each view. Then, the 3D keypoints
are recovered by an algebraic triangulation module. In the
second stage, POEM fuses the features from different views
in a space embedded by points and predicts the hand mesh
in this space (Sec. 3.3). The point feature on hand vertices
will iteratively interact with the features of the embedded
points through a cross-set attention mechanism, and the up-
dated vertex features are further used by POEM to predict
the vertex’s refined position (Sec. 3.3.3).
We conduct extensive experiments on three multi-view
datasets for hand mesh reconstruction under the object’s oc-
clusion, namely HO3D [12], DexYCB [4], and OakInk [49].
With the proposed fusion mode and attention mechanism,
POEM achieves state-of-the-art on all three datasets.
Our contributions are in three-fold:
• We investigate the multi-view pose and shape recon-
struction problem from a new perspective, that is, the
interaction between a target point set ( i.e. mesh vertices)
and a basis point set ( i.e. point cloud in the camera frus-
tum spaces).
• According to that, we propose an end-to-end learning
framework: POEM for reconstructing hand mesh from
multi-view images through a point embedded multi-view
stereo. To encourage interaction between two point sets,
POEM introduces two new operations: a point-based
feature fusion strategy and a cross-set point attention.
• We conduct extensive experiments to demonstrate the
efficacy of the architecture in POEM. As a regression
model targeting mesh reconstruction, POEM achieves
significant improvement compared to the previous state-
of-the-art.
|
Yuan_Bi3D_Bi-Domain_Active_Learning_for_Cross-Domain_3D_Object_Detection_CVPR_2023 | Abstract
Unsupervised Domain Adaptation (UDA) technique has
been explored in 3D cross-domain tasks recently. Though
preliminary progress has been made, the performance gap
between the UDA-based 3D model and the supervised
one trained with the fully annotated target domain is still large.
This motivates us to consider selecting partial-yet-
important target data and labeling them at a minimum cost,
to achieve a good trade-off between high performance and
low annotation cost. To this end, we propose a Bi-domain
active learning approach, namely Bi3D, to solve the cross-
domain 3D object detection task. The Bi3D first develops a
domainness-aware source sampling strategy, which identi-
fies target-domain-like samples from the source domain to
avoid the model being interfered with by irrelevant source data.
Then a diversity-based target sampling strategy is devel-
oped, which selects the most informative subset of the target
domain to improve the model’s adaptability to the target domain
using as little annotation budget as possible. Experiments
are conducted on typical cross-domain adaptation scenar-
ios including cross-LiDAR-beam, cross-country, and cross-
sensor , where Bi3D achieves a promising target-domain de-
tection accuracy ( 89.63% on KITTI) compared with UDA-
based work ( 84.29% ), even surpassing the detector trained
on the full set of the labeled target domain ( 88.98% ). Our
code is available at: https://github.com/PJLab-
ADG/3DTrans .
| 1. Introduction
LiDAR-based 3D Object Detection (3DOD) [ 5,13,26,
28,40] has advanced a lot recently. However, the gen-
eralization of a well-trained 3DOD model from a source point cloud dataset (domain) to another one, namely cross-
∗This work was done when Jiakang Yuan was an intern at Shanghai
AI Laboratory.
†Corresponding to: Tao Chen ([email protected]), Bo Zhang
([email protected])
Figure 1. Comparisons among (a) The general 3DOD pipeline,
(b) Self-training based Unsupervised Domain Adaptation 3DODpipeline, and (c) Active Domain Adaptation 3DOD pipeline that
selects representative target data, and then annotates them by an
oracle (human expert) for subsequent model refinement.
domain 3DOD, is still under-explored. Such a task in fact
is important in many real-world applications. For example, in the autonomous driving scenario, the target scene distribution frequently changes due to unforeseen differences in dynamically changing environments, making cross-domain 3DOD an urgent problem to be resolved.
Benefiting from the success of Unsupervised Domain
Adaptation (UDA) technique in 2D cross-domain tasks [ 3,
7,10,14,32,45,48], several attempts are made to apply UDA
for tackling 3D cross-domain tasks [ 15,20,22,36,39,42,46].
ST3D [ 42] designs a self-training-based framework to adapt
a pre-trained detector from the source domain to a new target domain. LiDAR distillation [ 36] exploits transferable
knowledge learned from high-beam LiDAR data to the low
one. Although these UDA 3D models have achieved significant performance gains for the cross-domain task, there
is still a large performance gap between these UDA models
and the supervised ones trained using a fully-annotated tar-
get domain. For example, ST3D [ 42] only achieves 72.94%
AP 3Din nuScenes [ 1]-to-KITTI [ 8] cross-domain setting,
yet the fully-supervised result using the same baseline de-
tector can reach to 82.50% AP 3Don KITTI.
To further reduce the detection performance gap between
UDA-based 3D models and the fully-supervised ones, an
initial attempt is to leverage the Active Domain Adaptation
(ADA) technique [ 6,17,29,37,38], whose goal is to select a
subset quota of all unlabeled samples from the target domain
to perform the manual annotation for model training. Actually,
the ADA task has been explored in 2D vision fields such as
AADA [ 29], TQS [ 6], and CLUE [ 17], but its research on
3D point cloud data still remains blank. In order to verify the
versatility of 2D image-based ADA methods towards 3D point
clouds, we conduct extensive attempts by integrating the recently
proposed ADA methods, e.g., TQS [ 6] and CLUE [ 17], into
several typical 3D baseline detectors, e.g., PV-RCNN [ 26] and
Voxel R-CNN [ 5]. Results show that these 2D ADA methods
cannot obtain satisfactory detection accuracy under the 3D
scene’s domain discrepancies. For example, PV-RCNN coupled
with TQS only achieves 75.40% AP 3D, which largely falls
behind the fully-supervised result of 82.50% AP 3D.
fully-supervised result 82.50% AP 3D.
As a result, directly selecting a subset of given 3D
frames using 2D ADA methods to tackle 3D scene’s do-
main discrepancies is challenging, which can be attributedto the following reasons. (1) The sparsity of the 3D point
clouds leads to huge inter-domain discrepancies that harm
the discriminability of domain-related features. (2) Theintra-domain feature variations are widespread within the
source domain, which enlarges the differentiation betweenthe selected target domain samples and the entire source do-main samples, bringing negative transfer to the model adap-tation on the target domain.
To this end, we propose a Bi-domain active learning
(Bi3D) framework to conduct the active learning for the
3D point clouds. To tackle the problem of sparsity ,w e
design a foreground region-aware discriminator, which ex-ploits an RPN-based attention enhancement to derive aforeground-related domainness metric, that can be regarded
as an important proxy for active sampling strategy. To
address the problem of intra-domain feature variations
within the source domain, we conceive a Bi-domain sam-
pling approach, where Bi-domain means that data from bothsource and target domains are picked up for safe and robust
model adaptation. Specifically, the Bi3D is composed of adomainness-aware source sampling strategy and a diversity-
based target sampling strategy. The source sampling strat-
egy aims to select target-domain-like samples from the
source domain, by judging the corresponding domainnessscore of each given source sample. Then, the target sam-pling strategy is utilized to select diverse but representative
data from the target domain by dynamically maintaining a
similarity bank. Finally, we employ the sampled data fromboth domains to adapt the source pre-trained detector on anew target domain at a low annotation cost.
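A highly simplified sketch of this Bi-domain sampling loop is given below; the domainness scores are assumed to come from the foreground-aware discriminator, the greedy farthest-point selection is only a stand-in for the similarity bank, and the budgets are placeholders.

```python
# Simplified sketch of Bi-domain active sampling (not the official Bi3D code).
# `domainness_src` is assumed to come from a foreground-aware domain
# discriminator (higher = more target-like); `feats_tgt` are frame features.
import numpy as np

def sample_source(domainness_src, budget):
    # Keep the most target-domain-like source frames.
    return np.argsort(-domainness_src)[:budget]

def sample_target_diverse(feats_tgt, budget):
    # Greedy farthest-point style selection as a stand-in for the
    # similarity bank: pick the frame least similar to those already chosen.
    feats = feats_tgt / np.linalg.norm(feats_tgt, axis=1, keepdims=True)
    selected = [0]
    for _ in range(budget - 1):
        sim_to_bank = feats @ feats[selected].T        # (N, |bank|)
        scores = sim_to_bank.max(axis=1)               # similarity to nearest pick
        scores[selected] = np.inf                      # never re-pick a frame
        selected.append(int(scores.argmin()))
    return np.array(selected)
```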
The main contributions can be summarized as follows:
1. From a new perspective of chasing high performance
at a low cost, we explore the possibilities of leverag-
ing active learning to achieve effective 3D scene-level cross-domain object detection.
2. A Bi-domain active sampling approach is proposed,
consisting of a domainness-aware source sampling strategy and a diversity-based target sampling strat-
egy to identify the most informative samples from both
source and target domains, boosting the model’s adaptation performance.
3. Experiments show that Bi3D outperforms state-of-the-
art UDA works with only 1% target annotation bud-
get for cross-domain 3DOD. Moreover, Bi3D achieves
89.63% AP BEV in the nuScenes-to-KITTI scenario,
surpassing the fully supervised result ( 88.98% AP BEV)
on the KITTI dataset.
|
Yong_A_General_Regret_Bound_of_Preconditioned_Gradient_Method_for_DNN_CVPR_2023 | Abstract
While adaptive learning rate methods, such as Adam,
have achieved remarkable improvement in optimizing Deep
Neural Networks (DNNs), they consider only the diago-
nal elements of the full preconditioned matrix. Though the
full-matrix preconditioned gradient methods theoretically
have a lower regret bound, they are impractical for use to
train DNNs because of the high complexity. In this paper,
we present a general regret bound with a constrained full-
matrix preconditioned gradient, and show that the updat-
ing formula of the preconditioner can be derived by solving
a cone-constrained optimization problem. With the block-
diagonal and Kronecker-factorized constraints, a specific
guide function can be obtained. By minimizing the up-
per bound of the guide function, we develop a new DNN
optimizer, termed AdaBK. A series of techniques, includ-
ing statistics updating, dampening, efficient matrix inverse
root computation, and gradient amplitude preservation, are
developed to make AdaBK effective and efficient to imple-
ment. The proposed AdaBK can be readily embedded into
many existing DNN optimizers, e.g., SGDM and AdamW,
and the corresponding SGDM BK and AdamW BK algo-
rithms demonstrate significant improvements over existing
DNN optimizers on benchmark vision tasks, including im-
age classification, object detection and segmentation. The
code is publicly available at https://github.com/
Yonghongwei/AdaBK .
| 1. Introduction
Stochastic gradient descent (SGD) [26] and its vari-
ants [21, 23], which update the parameters along the oppo-
site of their gradient directions, have achieved great success
in optimizing deep neural networks (DNNs) [14, 24]. In-
stead of using a uniform learning rate for different parame-
ters, Duchi et al. [5] proposed the AdaGrad method, which
adopts an adaptive learning rate for each parameter, and
proved that AdaGrad can achieve lower regret bound than
SGD. Following AdaGrad, a class of adaptive learning rate
gradient descent methods has been proposed. For example,RMSProp [30] and AdaDelta [35] introduce the exponential
moving average to replace the sum of second-order statis-
tics of the gradient for computing the adaptive learning rate.
Adam [15] further adopts the momentum into the gradient,
and AdamW [22] employs a weight-decoupled strategy to
improve the generalization performance. RAdam [18], Ad-
abelief [38] and Ranger [19,32,37] are proposed to acceler-
ate training and improve the generalization capability over
Adam. The adaptive learning rate methods have become the
mainstream DNN optimizers.
In addition to AdaGrad, Duchi et al. [5] provided a full-
matrix preconditioned gradient descent (PGD) method that
adopts the matrix $H_T = (\sum_{t=1}^{T} g_t g_t^\top)^{1/2}$ to adjust the
gradient $g_T$, where $t$ denotes the iteration index and $T$ is
the current iteration number. It has been proved
[5] that the preconditioned gradient $H_T^{-1} g_T$ has a lower
regret bound than the adaptive learning rate methods that
only consider the diagonal elements of $H_T$. However,
the full-matrix preconditioned gradient is impractical to
use due to its high dimension, which limits its applica-
tion to DNN optimization. Various works have been re-
ported to solve this problem in parameter space by adding
some structural constraints on the full-matrix HT. For in-
stances, GGT [1] stores only the gradients of recent itera-
tions so that the matrix inverse root can be computed effi-
ciently by fast low-rank computation tricks. Yun et al. [34]
proposed a mini-block diagonal matrix framework to re-
duce the cost through coordinate partitioning and grouping
strategies. Gupta et al . [9] proposed to extend AdaGrad
with Kronecker products of full-matrix preconditioners to
make it more efficient in DNN training. Besides, natural
gradient approaches [6, 7], which adopt the approximations
of the Fisher matrix to correct the descent direction, can also
be regarded as full-matrix preconditioners.
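To illustrate the gap these works try to close, the toy sketch below contrasts a diagonal (AdaGrad-style) update with the full-matrix preconditioned update for a small parameter vector; the learning rate and damping values are arbitrary illustrative choices, and the point is only that the full d-by-d matrix becomes infeasible for large d.

```python
# Toy contrast between diagonal and full-matrix preconditioning
# (illustrative only; practical methods must constrain H_T).
import numpy as np

def diagonal_precondition(grads, lr=0.1, eps=1e-8):
    # AdaGrad: divide by the square root of accumulated squared gradients
    # (i.e., the diagonal of H_T).
    acc = np.sum(np.stack(grads) ** 2, axis=0)
    return -lr * grads[-1] / (np.sqrt(acc) + eps)

def full_matrix_precondition(grads, lr=0.1, eps=1e-8):
    # Full-matrix AdaGrad: H_T = (sum_t g_t g_t^T)^{1/2}, step = -lr * H_T^{-1} g_T.
    G = sum(np.outer(g, g) for g in grads)
    evals, evecs = np.linalg.eigh(G)
    H = evecs @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ evecs.T
    return -lr * np.linalg.solve(H + eps * np.eye(len(H)), grads[-1])
```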
The existing constrained PGD (CPGD) methods, how-
ever, are heuristic since manually designed approximations
to the full matrix HTare employed in them, while their in-
fluence on the regret bound is unknown. By far, they lack a
general regret-bound theory that can guide us to design the
full-matrix preconditioned gradient methods. On the other
hand, the practicality and effectiveness of these precondi-
tioner methods are also an issue, which prevents them from
being widely used in training DNNs.
To address the above-mentioned issues, in this paper we
present a theorem to connect the regret bound of the con-
strained full-matrix preconditioner with a guide function.
By minimizing the guide function under the constraints, an
updating formula of the preconditioned gradient can be de-
rived. That is, optimizing the guide function of the precon-
ditioner will minimize its regret bound at the same time,
while different constraints can yield different updating for-
mulas. With the commonly-used constraints on DNN pre-
conditioners, such as the block-diagonal and Kronecker-
factorized constraints [7, 9], specific guide functions can be
obtained. By minimizing the upper bound of the guide func-
tion, a new optimizer, namely AdaBK, is derived.
We further propose a series of techniques, including
statistics updating, dampening, efficient matrix inverse root
computation and gradient norm recovery, to make AdaBK
more practical to use for DNN optimization. By embedding
AdaBK into SGDM and AdamW (or Adam), we develop
two new DNN optimizers, SGDM BK and AdamW BK.
With acceptable extra computation and memory cost, they
achieve significant performance gain in convergence speed
and generalization capability over state-of-the-art DNN op-
timizers, as demonstrated in our experiments in image clas-
sification, object detection and segmentation.
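As a rough illustration of what a block-diagonal, Kronecker-factorized constraint buys in practice, the sketch below applies a Shampoo-style [9] preconditioned update to a single weight matrix, with the matrix inverse root computed by eigendecomposition; the exponents, damping, and statistics accumulation are simplifications and not the exact SGDM BK / AdamW BK update rules.

```python
# Shampoo-style [9] Kronecker-factored preconditioning for one weight matrix
# (illustrative simplification; the exact AdaBK update differs).
import numpy as np

def matrix_power(A, p, eps=1e-6):
    # Matrix (inverse) root via eigendecomposition of a damped PSD matrix.
    evals, evecs = np.linalg.eigh(A + eps * np.eye(len(A)))
    return evecs @ np.diag(np.clip(evals, eps, None) ** p) @ evecs.T

def kron_preconditioned_step(W, G, L, R, lr=0.1):
    # Accumulate left/right second-moment statistics of the (m x n) gradient G,
    # so only (m x m) and (n x n) matrices are kept instead of (mn x mn).
    L += G @ G.T
    R += G.T @ G
    update = matrix_power(L, -0.25) @ G @ matrix_power(R, -0.25)
    return W - lr * update, L, R
```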
For a better understanding of our proposed regret bound
and the developed DNN optimizer, in Fig. 1, we illustrate
the existing major DNN optimizers and their relationships.
SGD and its momentum version (SGDM) apply the same
learning rate to all parameters based on their gradient de-
scent directions. The adaptive learning rate methods as-
sign different learning rates to different parameters by using
second-order information of the gradients, achieving bet-
ter convergence performance. The adaptive learning rate
methods can be viewed as special cases of PGD methods
by considering only the diagonal elements of the full pre-
conditioned matrix of gradients. Our method belongs to the
class of PGD methods, while our proposed general regret
bound of constrained PGD methods can be applied to the
PGD optimizers under different constraints, including Ada-
Grad, Full-Matrix AdaGrad and our AdaBK.
Notation system. We denote by $w_t$ and $g_t$ the weight vector and its
gradient of a DNN model in the $t$-th iteration. Denoting by $g_{t,i}$ the
gradient of the $i$-th sample in a batch in the $t$-th iteration, we have
$g_t = \frac{1}{n}\sum_{i=1}^{n} g_{t,i}$, where $n$ is the batch size. The
notations $A \succeq 0$ and $A \succ 0$ for a matrix $A$ denote that $A$ is
symmetric positive semidefinite (PSD) and symmetric positive definite,
respectively. $A \succeq B$ or $A - B \succeq 0$ means that $A - B$ is PSD.
$\mathrm{Tr}(A)$ represents the trace of the matrix $A$. For a PSD matrix $A$,
$A^{\alpha} = U \Sigma^{\alpha} U^{\top}$, where $U \Sigma U^{\top}$ is the
Singular Value Decomposition (SVD) of $A$. $\|x\|_A = \sqrt{x^{\top} A x}$ is
the Mahalanobis norm of $x$ induced by the PSD matrix $A$, and its dual norm is
$\|x\|_A^{*} = \sqrt{x^{\top} A^{-1} x}$. $A \otimes B$ denotes the Kronecker
product of $A$ and $B$, while $A \odot B$ and $A^{\odot \alpha}$ are the
element-wise matrix product and element-wise power operation, respectively.
$\mathrm{Diag}(x)$ is a diagonal matrix with diagonal vector $x$, and
$\mathrm{vec}(\cdot)$ denotes the vectorization function.
Figure 1. Illustration of the main DNN optimizers.
|
Zhang_PromptCAL_Contrastive_Affinity_Learning_via_Auxiliary_Prompts_for_Generalized_Novel_CVPR_2023 | Abstract
Although existing semi-supervised learning models
achieve remarkable success in learning with unannotated
in-distribution data, they mostly fail to learn on unlabeled
data sampled from novel semantic classes due to their
closed-set assumption. In this work, we target a prag-
matic but under-explored Generalized Novel Category Dis-
covery (GNCD) setting. The GNCD setting aims to cat-
egorize unlabeled training data coming from known and
novel classes by leveraging the information of partially la-
beled known classes. We propose a two-stage Contrastive
Affinity Learning method with auxiliary visual Prompts,
dubbed PromptCAL , to address this challenging problem.
Our approach discovers reliable pairwise sample affini-
ties to learn better semantic clustering of both known and
novel classes for the class token and visual prompts. First,
we propose a discriminative prompt regularization loss to
reinforce semantic discriminativeness of prompt-adapted
pre-trained vision transformer for refined affinity relation-
ships. Besides, we propose contrastive affinity learning
to calibrate semantic representations based on our itera-
tive semi-supervised affinity graph generation method for
semantically-enhanced supervision. Extensive experimen-
tal evaluation demonstrates that our PromptCAL method is
more effective in discovering novel classes even with lim-
ited annotations and surpasses the current state-of-the-art
on generic and fine-grained benchmarks ( e.g., with nearly
11% gain on CUB-200, and 9% on ImageNet-100) on over-
all accuracy. Our code is available at https://github.com/sheng-eatamath/PromptCAL.
| 1. Introduction
The deep neural networks have demonstrated favorable
performance in the Semi-Supervised Learning (SSL) set-
ting [15,29,40,46,49]. Some recent works can even achieve
comparable performance to their fully-supervised counter-
parts using few annotations for image recognition [3,38,46].
Figure 1. PromptCAL Overview. In contrast to previous methods
based on semi-supervised contrastive learning, PromptCAL con-
structs an affinity graph on-the-fly to guide representation learning
of the class token and prompts. Meanwhile, our prompt-adapted
backbone can be tuned to enhance semantic discriminativeness.
PromptCAL can discover reliable affinities from a memory bank,
especially for novel classes. Therefore, our PromptCAL is better
task-aligned and discriminative to novel semantic information.
However, these approaches heavily rely on the closed-world
assumption that unlabeled data share the same underlying
class label space as the labeled data [10, 47]. In many real-
istic scenarios, this assumption does not hold true because
of the inherent dynamism of real-world tasks where novel
classes can emerge in addition to known classes.
In contrast to SSL, the Novel Category Discovery (NCD)
problem was introduced by [12] to relax the closed-world
assumption of SSL, which assumes the unlabeled data con-
tain novel classes. Recently, the nascent Generalized Novel
Category Discovery (GNCD) problem, first proposed in
[4, 41], extends NCD and assumes the unlabeled data can
contain both known and novel classes, which is more prag-
matic and challenging. To be more specific, GNCD intends
to categorize images sampled from predefined categories
in the training set comprising labeled-knowns, unlabeled-
knowns, and unlabeled-novels.
1
Our work focuses on GNCD problem. The key challenge
of GNCD is to discriminate among novel classes when only
the ground truths of known classes are accessible in the
training set. Recent studies show that self-supervised pre-
trained representations are conducive to discovering novel
semantics [4, 5, 11, 41, 53]. A typical work on GNCD [41]
takes advantage of the large-scale pre-trained visual trans-
former (ViT) [37], and learns robust clusters for known and
novel classes through semi-supervised contrastive learning
on downstream datasets. However, we discover that the re-
markable potential of pre-trained ViT is actually suppressed
by this practice, due to the class collision [52] issue induced
by abundant false negatives in contrastive loss, i.e., consid-
ering different unlabeled images from the same or similar
semantic class as false negatives. As supported by empiri-
cal studies, abundant false negatives in contrastive training
can deteriorate the compactness and purity of semantic clus-
tering [5, 16, 21, 52]. Based on empirical investigation, we
show that this issue is particularly severe in category discov-
ery. Furthermore, although the existing commonly adopted
practice [4, 41] of freezing most parts of the pre-trained
backbone can alleviate overfitting on known classes, it con-
strains the flexibility and adaptability of backbones [18].
Lack of adaptability inhibits models from learning discrim-
inative semantic information on downstream datasets.
To address the above limitations and learn better semanti-
cally discriminative representations, we propose Prompt -
based Contrastive Affinity Learning ( PromptCAL ) frame-
work to tackle GNCD problem. To be specific, our ap-
proach aims to discover semantic clusters in unlabeled data
by simultaneous semantic prompt learning based on our
Discriminative Prompt Regularization (DPR) loss and rep-
resentation calibration based on our Contrastive Affinity
Learning (CAL) process. Firstly , CAL discovers abundant
reliable pseudo positives for DPR loss and contrastive loss
based on generated affinity graphs. These semantic-aware
pseudo labels further enhance the semantic discriminative-
ness of DPR supervision. Secondly , DPR regularizes se-
mantic representations of ensembled prompts, which facil-
itates the discovery of more accurate pseudo labels at the
next-step of CAL. Therefore, as model and prompt repre-
sentations are iteratively enhanced, we can obtain higher
quality pseudo positives for further self-training as well as
acquire better semantic clustering.
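A minimal sketch of the kind of semi-supervised affinity graph such pseudo-positive mining relies on is shown below; the mutual k-NN rule and the simple label override are illustrative simplifications of the paper's actual graph generation on the memory bank.

```python
# Illustrative sketch: build a semi-supervised affinity graph from a memory
# bank of embeddings. Two samples are linked (treated as pseudo positives)
# if they are mutual k-nearest neighbours; labeled pairs override this with
# ground-truth agreement. Simplified relative to the paper.
import torch
import torch.nn.functional as F

def build_affinity_graph(embeds, labels, k=10):
    # embeds: (N, D) memory-bank features; labels: (N,) with -1 for unlabeled
    z = F.normalize(embeds, dim=1)
    sim = z @ z.T
    knn = sim.topk(k + 1, dim=1).indices[:, 1:]            # drop self-similarity
    A = torch.zeros_like(sim, dtype=torch.bool)
    rows = torch.arange(len(z)).unsqueeze(1).expand_as(knn)
    A[rows, knn] = True
    A = A & A.T                                            # keep mutual k-NN links
    labeled = labels >= 0
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    both = labeled.unsqueeze(0) & labeled.unsqueeze(1)
    A[both] = same[both]                                   # labels override k-NN
    return A                                               # pseudo-positive mask
```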
Our PromptCAL achieves State-Of-The-Art (SOTA)
performance in extensive experimental evaluation on six
benchmarks. Specifically, PromptCAL remarkably sur-
passes previous SOTA by more than 10% clustering ac-
curacy on the fine-grained CUB-200 and StanfordCars
datasets; it also significantly outperforms previous SoTAs
by nearly 4%on ImageNet-100 and 7%on CIFAR-100. In-
terestingly, we identify that both DPR supervised prompts
and unsupervised prompts of PromptCAL can learn se-mantic discriminativeness, which advances the flexibility of
the pre-trained backbone. Furthermore, PromptCAL still
achieves the best performance in challenging low-labeling
and few-class setups.
Our contributions include: (1)We propose a two-stage
framework for the generalized novel category discovery
problem, in which semantic prompt tuning and contrastive
affinity learning mutually reinforce and benefit each other
during the learning process. (2)We propose two syn-
ergistic learning objectives, contrastive affinity loss and
discriminative prompt regularization loss, based on our
semi-supervised adapted affinity graphs to enhance seman-
tic discriminativeness. (3)We comprehensively evaluate
our method on three generic ( i.e., CIFAR-10, CIFAR-100,
and ImageNet-100) and three fine-grained benchmarks ( i.e.,
CUB-200, Aircraft, and StandfordCars), achieving state-of-
the-art performance, thereby showing its effectiveness. (4)
We further showcase generalization ability of PromptCAL
and its effectiveness in more challenging low-labeling and
few-class setups.
|
Zhang_Semi-DETR_Semi-Supervised_Object_Detection_With_Detection_Transformers_CVPR_2023 | Abstract
We analyze the DETR-based framework on semi-
supervised object detection (SSOD) and observe that (1) the
one-to-one assignment strategy generates incorrect match-
ing when the pseudo ground-truth bounding box is inaccu-
rate, leading to training inefficiency; (2) DETR-based de-
tectors lack deterministic correspondence between the in-
put query and its prediction output, which hinders the ap-
plicability of the consistency-based regularization widely
used in current SSOD methods. We present Semi-DETR,
the first transformer-based end-to-end semi-supervised ob-
ject detector, to tackle these problems. Specifically, we
propose a Stage-wise Hybrid Matching strategy that com-
bines the one-to-many assignment and one-to-one assign-
ment strategies to improve the training efficiency of the first
stage and thus provide high-quality pseudo labels for the
training of the second stage. Besides, we introduce a Cross-
view Query Consistency method to learn the semantic fea-
ture invariance of object queries from different views while
avoiding the need to find deterministic query correspon-
dence. Furthermore, we propose a Cost-based Pseudo La-
bel Mining module to dynamically mine more pseudo boxes
based on the matching cost of pseudo ground truth bound-
ing boxes for consistency training. Extensive experiments
on all SSOD settings of both COCO and Pascal VOC bench-
mark datasets show that our Semi-DETR method outper-
forms all state-of-the-art methods by clear margins.
| 1. Introduction
Semi-supervised object detection (SSOD) aims to boost
the performance of a fully-supervised object detector by ex-
*Equally-contributed authors. Work done during an internship at
Baidu.
†Corresponding author.
Figure 1. Comparisons between the vanilla DETR-SSOD frame-
work based on the Teacher-Student architecture and our proposed
Semi-DETR framework. Semi-DETR consists of Stage-wise Hy-
brid Matching, Cross-view Query Consistency powered by a cost-
based pseudo label mining strategy.
ploiting a large amount of unlabeled data. Current state-of-
the-art SSOD methods are primarily based on object detec-
tors with many hand-crafted components, e.g., rule-based
label assigner [9,26,27,31] and non-maximum suppression
(NMS) [1] post-processing. We term this type of object de-
tector as a traditional object detector. Recently, DETR [2],
a simple transformer-based end-to-end object detector, has
received growing attention. Generally, the DETR-based
framework builds upon transformer [32] encoder-decoder
architecture and generates unique predictions by enforcing a
set-based global loss via bipartite matching during training.
It eliminates the need for various hand-crafted components,
achieving state-of-the-art performance in fully-supervised
object detection. Although the performance is desirable,
how to design a feasible DETR-based SSOD framework re-
mains under-explored. There are still no systematic ways to
fulfill this research gap.
Designing an SSOD framework for DETR-based de-
tectors is non-trivial. Concretely, DETR-based detectors
take a one-to-one assignment strategy where the bipartite-
matching algorithm forces each ground-truth (GT) bound-
Figure 2. Performance comparisons between the proposed Semi-
DETR and other SSOD methods, including PseCo [16] and Dense-
Teacher [42].
ing box to match a candidate proposal as positive, treating
remains as negatives. It goes well when the ground-truth
bounding boxes are accurate. However, directly integrat-
ing DETR-based framework with SSOD is problematic, as
illustrated in Fig. 1 (a) where a DETR-SSOD vanilla frame-
work utilizes DETR-based detectors to perform pseudo la-
beling on unlabeled images. In the Teacher-Student archi-
tecture, the teacher model usually generates noisy pseudo
bounding boxes on the unlabeled images. When the pseudo
bounding box is inaccurate, the one-to-one assignment
strategy is doomed to match a single inaccurate proposal
as positive, leaving all other potential correct proposals as
negative, thus causing learning inefficiency. As a compar-
ison, the one-to-many assignment strategy adopted in the
traditional object detectors maintains a set of positive pro-
posals, having a higher chance of containing the correct
positive proposal. On the one hand, the one-to-one assign-
ment strategy enjoys the merits of NMS-free end-to-end
detection but suffers the training inefficiency under semi-
supervised scenarios; on the other hand, the one-to-many
assignment strategy obtains candidate proposal set with bet-
ter quality making the detector optimized more efficiently
but inevitably resulted in duplicate predictions. Designing
a DETR-based SSOD framework that embraces these two
merits could bring the performance to the next level.
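To make the two assignment strategies concrete, the sketch below contrasts Hungarian one-to-one matching with a simple top-k one-to-many assignment over a given cost matrix; the cost definition and the fixed k are illustrative stand-ins, not the exact matchers used in Semi-DETR.

```python
# Illustrative contrast of the two assignment strategies over a cost matrix
# cost[i, j] = matching cost between ground-truth (or pseudo) box i and
# prediction j. The cost itself (cls + box terms) is assumed to be given.
import numpy as np
from scipy.optimize import linear_sum_assignment

def one_to_one_assign(cost):
    # Hungarian matching: each GT gets exactly one prediction (DETR-style).
    gt_idx, pred_idx = linear_sum_assignment(cost)
    return list(zip(gt_idx, pred_idx))

def one_to_many_assign(cost, k=4):
    # Each GT keeps its k lowest-cost predictions as positives
    # (a simplified stand-in for traditional-detector assignment).
    matches = []
    for i, row in enumerate(cost):
        for j in np.argsort(row)[:k]:
            matches.append((i, int(j)))
    return matches
```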
Additionally, the consistency-based regularization com-
monly used in current SSOD methods becomes infeasible
in DETR-based SSOD. Specifically, current SSOD meth-
ods [3, 10, 13, 16] utilize consistency-based regulariza-
tion to help object detectors learn potential feature invari-
ance by imposing consistency constraints on the outputs
of pairs-wise inputs (such as scale consistency [3, 10, 16],
weak-strong consistency [13], etc.). Since the input fea-
tures are deterministic in traditional object detectors, there
is a one-to-one correspondence between the inputs and out-
puts, which makes the consistency constraint convenientto implement. However, this is not the case in DETR-
based detectors. DETR-based detectors [2, 15, 20, 40, 44]
use randomly initialized learnable object queries as inputs
and constantly update the query features through the atten-
tion mechanism. As the query features update, the corre-
sponding prediction results constantly change, which has
been verified in [15]. In other words, there is no determinis-
tic correspondence between the input object queries and its
output prediction results, which prevents consistency regu-
larization from being applied to DETR-based detectors.
According to the above analysis, we propose a new
DETR-based SSOD framework based on the Teacher-
Student architecture, which we term Semi-DETR presented
in Fig. 1 (b). Concretely, we propose a Stage-wise Hybrid
Matching module that imposes two stages of training us-
ing the one-to-many assignment and the one-to-one assign-
ment, respectively. The first stage aims to improve the train-
ing efficiency via the one-to-many assignment strategy and
thus provide high-quality pseudo labels for the second stage
of one-to-one assignment training. Besides, we introduce
aCross-view Query Consistency module that constructs
cross-view object queries to eliminate the requirement of
finding deterministic correspondence of object queries and
aids the detector in learning semantically invariant charac-
teristics of object queries between two augmented views.
Furthermore, we devise a Cost-based Pseudo Label Min-
ingmodule based on the Gaussian Mixture Model (GMM)
that dynamically mines reliable pseudo boxes for consis-
tency learning according to their matching cost distribution.
Differently, Semi-DETR is tailored for DETR-based frame-
work, which achieves new SOTA performance compared to
the previous best SSOD methods as illustrated in Fig. 2.
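As an illustration of the cost-based mining idea, the sketch below fits a two-component GMM to per-box matching costs and keeps the boxes assigned to the low-cost component; the scalar cost input and the hard component split are simplifying assumptions.

```python
# Illustrative sketch of cost-based pseudo-label mining with a 2-component
# GMM: boxes whose matching cost falls in the low-cost mode are kept.
import numpy as np
from sklearn.mixture import GaussianMixture

def mine_reliable_boxes(match_costs):
    costs = np.asarray(match_costs).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(costs)
    low_cost_comp = int(np.argmin(gmm.means_.ravel()))
    keep = gmm.predict(costs) == low_cost_comp
    return keep          # boolean mask over candidate pseudo boxes
```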
To sum up, this paper has the following contributions:
• We present a new DETR-based SSOD method based
on the Teacher-Student architecture, called Semi-
DETR. To our best knowledge, we are the first to ex-
amine the DETR-based detectors on SSOD, and we
identify core issues in integrating DETR-based detec-
tors with the SSOD framework.
• We propose a stage-wise hybrid matching method that
combines the one-to-many assignment and one-to-one
assignment strategies to address the training ineffi-
ciency caused by the inherent one-to-one assignment
within DETR-based detectors when applied to SSOD.
• We introduce a consistency-based regularization
scheme and a cost-based pseudo-label mining algo-
rithm for DETR-based detectors to help learn semantic
feature invariance of object queries from different aug-
mented views.
• Extensive experiments show that our Semi-DETR
method outperforms all previous state-of-the-art meth-
ods by clear margins under various SSOD settings on
both MS COCO and Pascal VOC benchmark datasets.
23810
|
Yu_Efficient_Loss_Function_by_Minimizing_the_Detrimental_Effect_of_Floating-Point_CVPR_2023 | Abstract
Attackers can deceive neural networks by adding hu-
man imperceptive perturbations to their input data; this
reveals the vulnerability and weak robustness of current
deep-learning networks. Many attack techniques have been
proposed to evaluate the model’s robustness. Gradient-
based attacks suffer from severely overestimating the ro-
bustness. This paper identifies that the relative error in cal-
culated gradients caused by floating-point errors, including
floating-point underflow and rounding errors, is a funda-
mental reason why gradient-based attacks fail to accurately
assess the model’s robustness. Although it is hard to elim-
inate the relative error in the gradients, we can control its
effect on the gradient-based attacks. Correspondingly, we
propose an efficient loss function by minimizing the detri-
mental impact of the floating-point errors on the attacks.
Experimental results show that it is more efficient and reli-
able than other loss functions when examined across a wide
range of defence mechanisms.
| 1. Introduction
AI with deep neural networks (DNNs) as the core [24]
has achieved great success in different research directions
and has been widely used in many safety-critical systems,
such as aviation [1, 8, 35], medical diagnosis [27, 49], self-
driving [4,26,47] , etc. However, a severe problem with cur-
rent DNNs is that they are vulnerable to adversarial attacks.
Adding human imperceptive perturbations to input data can
mislead the model to output incorrect results [16, 45]. Such
vulnerability poses a significant security risk to all DNN-
based systems. Therefore, increasing the models’ robust-
ness and providing efficient and reliable evaluation methods
are becoming increasingly urgent and vital.
*Corresponding author.
1https://robustbench.github.io/
Figure 1. Comparison of the effectiveness of non-targeted and
multi-targeted PGD 100 iterations attacks using various loss func-
tions on the CIFAR-10 dataset. The best results were obtained
from RobustBench1using an ensemble attack with a minimum of
4900 iterations. The defence model is from [59].
Numerous defence strategies [6, 10, 13, 16, 29, 31, 55] to
improve the model’s robustness and many attacks [2, 29,
33, 41, 50, 52, 54, 59] to evaluate these strategies have been
proposed. Currently, the most efficient attack method is
known as white-box attack, where the attacker has com-
plete knowledge of the target defending strategy, includ-
ing its model architecture, parameters, detail of the train-
ing algorithm, dataset, etc. A typical example is Projected
Gradient Descent (PGD) [29] for evaluating the model's
robustness. However, studies have shown that PGD with
cross-entropy (CE) loss fails to provide accurate evaluation
results [6, 41] and significantly overestimates the model’s
robustness (Figure 1). To enhance the evaluation accuracy,
recent powerful evaluation methods were proposed to en-
semble diverse attack algorithms [10, 30]. However, their
performance is at the cost of high computational overhead.
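As a toy illustration of the kind of effect discussed in this paper (our own example, not the authors' code), the snippet below evaluates the cross-entropy gradient with respect to the logits, softmax(z) − y, in float32 and float64. For a moderately confident prediction, the float32 softmax already rounds to exactly 1.0, so the gradient component an attack needs collapses to zero even though its true value is nonzero:

import numpy as np

def ce_grad_wrt_logits(logits, true_class, dtype):
    """Gradient of cross-entropy w.r.t. the logits: softmax(z) - one_hot(y)."""
    z = np.asarray(logits, dtype=dtype)
    z = z - z.max()                       # standard stabilization
    p = np.exp(z) / np.exp(z).sum()
    grad = p.copy()
    grad[true_class] -= 1.0
    return grad

logits = [20.0, 0.0]                      # a confidently classified example
g32 = ce_grad_wrt_logits(logits, 0, np.float32)
g64 = ce_grad_wrt_logits(logits, 0, np.float64)
print(g32)   # approx. [ 0.0e+00,  2.06e-09]: the true-class gradient rounds to exactly 0
print(g64)   # approx. [-2.06e-09, 2.06e-09]: the float64 value is tiny but nonzero

The relative error of the float32 gradient for the true class is therefore 100%, which is exactly the situation in which a gradient-based attack receives no useful signal.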
The research community attributes the failure of
gradient-based attacks to gradient masking [15]. The core
of gradient masking is that the calculated gradients are not
necessarily efficient in guiding the generation of adversarial
examples. The causes of inefficient gradients may be related
to floating-point errors or other factors. This paper reveals
that the relative error in the gradients caused by floating-
point errors is one of the fundamental reasons for the failure
of gradient-based attacks, and the relative error is directly
affected by the value of the difference of the f |
Zhang_Class_Relationship_Embedded_Learning_for_Source-Free_Unsupervised_Domain_Adaptation_CVPR_2023 | Abstract
This work focuses on a practical knowledge transfer task
defined as Source-Free Unsupervised Domain Adaptation
(SFUDA), where only a well-trained source model and un-
labeled target data are available. To fully utilize source
knowledge, we propose to transfer the class relationship,
which is domain-invariant but still under-explored in previ-
ous works. To this end, we first regard the classifier weights
of the source model as class prototypes to compute class re-
lationship, and then propose a novel probability-based sim-
ilarity between target-domain samples by embedding the
source-domain class relationship, resulting in Class Rela-
tionship embedded Similarity (CRS). Here the inter-class
term is particularly considered in order to more accurately
represent the similarity between two samples, in which the
source prior of class relationship is utilized by weighting.
Finally, we propose to embed CRS into contrastive learn-
ing in a unified form. Here both class-aware and instance
discrimination contrastive losses are employed, which are
complementary to each other. We combine the proposed
method with existing representative methods to evaluate its
efficacy in multiple SFUDA settings. Extensive experimen-
tal results reveal that our method can achieve state-of-the-
art performance due to the transfer of domain-invariant
class relationship.1
| 1. Introduction
Benefiting from the large amount of labeled training
data, deep neural networks have achieved promising results
in many computer vision tasks [7, 15, 19, 95]. To reduce the
annotation cost, Unsupervised Domain Adaptation (UDA)
has been devised by transferring knowledge from a label-
rich source domain to a label-scarce target domain. Cur-
rently, many UDA methods have been proposed that jointly
*Corresponding author
1Code is available at https://github.com/zhyx12/CRCo
Figure 1. Illustration of our proposed method. The upper left represents the target-domain feature distribution under the source pre-trained model. The upper right is the feature distribution obtained by our method. The bottom is the process of class relationship embedded contrastive learning (a class-aware contrastive loss and an instance discrimination contrastive loss that pull samples together or push them away). Best viewed in color. Notation shown in the figure: $\boldsymbol{w}_i \in \mathbb{R}^{D\times 1}$ ($i \in \{1,2,3\}$) is the normalized source classifier weight of the $i$-th class; the class relationship matrix is $\boldsymbol{A}^s = [\boldsymbol{w}_i^{T}\boldsymbol{w}_j]_{ij}$ (in the example, $[[1, 0.1, 0.02], [0.1, 1, 0.02], [0.02, 0.02, 1]]$); and the class relationship embedded similarity is $S_{cr}(\boldsymbol{p}, \boldsymbol{p}') = \sum_{i=1}^{3} p_i p'_i + \sum_{i \neq j} A^s_{ij}\, p_i p'_j$, where the first term is the intra-class similarity and the second the inter-class similarity.
learn on the source and target data. But they would be
unapplicable for some real-world scenarios involving pri-
vacy ( e.g., medical images, surveillance videos) because the
source-domain data cannot be accessed. Thus, more recent
methods [36, 42, 71, 79, 87, 88] focus on Source-Free Un-
supervised Domain Adaptation (SFUDA). Under this set-
ting, the labeled source data are not accessible any more
when training the target model, but the pre-trained model
in the source domain is provided. Then a natural question
arises, i.e., what knowledge should we transfer to facilitate
the learning of unlabeled target-domain data?
Some methods [21, 36, 42] assume that the source hy-
pothesis ( i.e., classifier [42] or whole model [21, 36]) con-
tains sufficient knowledge for target domain. Then they
transfer source knowledge by directly aligning features with
the fixed source classifier [42], resorting to historical mod-
els [21], or weight regularization [36]. Another line of
works [87, 89, 90] assume the source model already forms
some semantic structure, and then utilize the local infor-
mation ( i.e., neighborhood) of feature space to enforce the
similarity in the output space. Despite these progress, what
knowledge to be transferred remains an open question.
In this work, we propose to transfer the class relation-
ship represented by the similarity of classifier weights. Ac-
tually, the class relationship is domain-invariant [14], since
the same class in the source and target domains essentially
represents the same semantic in spite of domain discrep-
ancy. For example, the computers are always more simi-
lar with the TV than the scissors. Thus, it is reasonable
to guide the target domain learning using class relationship
prior. Unlike previous methods that learn class distribu-
tion by pseudo labeling [21, 42] and local aggregation of
neighborhood predictions [79, 87, 89, 90], here we explic-
itly model the source-domain class relationship. To be spe-
cific, we regard each weight of classifier as the class proto-
type [90], and then compute a class relationship matrix $A^s$
by cosine similarity, as shown in Figure 1.
Before explaining how to use this matrix, we illustrate
the purpose of representation learning in the target domain
using Figure 1, where three classes are adopted for clarity.
The top left shows the target-domain feature distribution to-
gether with the class weights learned from the source do-
main. It can be seen that such a situation makes it difficult
to perform correct classification. In fact, it is expected that
the learned features are discriminative and compact around
the corresponding class weight, as shown on the top right.
To this end, an intuitive way is to make the relationship
of learned class prototypes in the target domain consistent
with that in the source domain. However, it has a very lim-
ited effect on training the target-domain model as such a
prototype-level constraint is too weak for optimization.
In this work, we propose to embed the source-domain
class relationship in contrastive learning, which has been
shown to be outstanding in representation learning [9, 18].
Here we design a novel sample similarity by taking $A^s$ into account. Specifically, we compute the similarity between two target-domain samples in the output space, represented by the prediction probabilities (i.e., $p$ and $p'$), as shown in Figure 1. In particular, we consider the inter-class term (i.e., $\sum_{i \neq j} p_i p'_j$) in addition to the traditional intra-class term (i.e., $\sum_{i} p_i p'_i$). Note that the sum of these two terms equals one, in which the intra-class term measures the similarity that two samples come from the same class, while the inter-class term measures that from different classes. Considering the relationship between classes, we weight the inter-class term by the non-diagonal elements of $A^s$. In particular, if class $i$ is closer to $j$ than to $k$ (i.e., $A^s_{ij} > A^s_{ik}$), $p_i p'_j$ would be more important in calculating the similarity of $p$ and $p'$. By the above design, our proposed similarity can more accurately express the relationship of two samples in the output space. For example, among the three classes $1, 2, 3$ in Figure 1, classes $1$ and $2$ are closer. Given three samples $x_1, x_2, x_3$ belonging to them with the probabilities $[0.9, 0.05, 0.05]$, $[0.05, 0.9, 0.05]$, and $[0.05, 0.05, 0.9]$: if we only use the intra-class term, all three samples have the same similarity, but with our designed similarity, $x_1$ is closer to $x_2$ than to $x_3$, which is more reasonable.
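The two quantities described above can be written compactly; the sketch below is our reading of the formulas (NumPy, not the authors' released code):

import numpy as np

def class_relationship(classifier_weights):
    """A^s: cosine similarity between L2-normalized classifier weights.

    classifier_weights: array of shape (num_classes, feat_dim).
    """
    w = classifier_weights / np.linalg.norm(classifier_weights, axis=1, keepdims=True)
    return w @ w.T

def crs(p, p_prime, A_s):
    """Class Relationship embedded Similarity between two probability vectors."""
    intra = np.sum(p * p_prime)                 # sum_i p_i * p'_i
    off_diag = A_s - np.diag(np.diag(A_s))      # zero the diagonal
    inter = p @ off_diag @ p_prime              # sum_{i != j} A^s_ij * p_i * p'_j
    return intra + inter

# Toy check with the three-class example from the text.
A_s = np.array([[1.0, 0.1, 0.02], [0.1, 1.0, 0.02], [0.02, 0.02, 1.0]])
x1 = np.array([0.9, 0.05, 0.05])
x2 = np.array([0.05, 0.9, 0.05])
x3 = np.array([0.05, 0.05, 0.9])
print(crs(x1, x2, A_s) > crs(x1, x3, A_s))      # True: x1 is closer to x2 than to x3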
On the basis of the proposed similarity, we further pro-
pose to perform contrastive learning in a unified form. In-
spired by recent success in semi-supervised learning [67,
85, 91] and unsupervised learning [9, 18], we propose
two types of contrastive losses. As shown in Figure 1,
the first one is the Class Relationship embedded Class-Aware
Contrastive (CR-CACo) loss, where the high-confident sam-
ples are enforced to be close to the corresponding proto-
type and away from other prototypes. Due to embedding
the prior class relationship, our CR-CACo loss is more ro-
bust to label noise caused by domain shift. The second one
is the Class Relationship embedded Instance Discrimination
Contrastive (CR-IDCo) loss, where two views of the same
sample are encouraged to be close and away from all other
samples. Benefited from our designed accurate similarity,
the CR-IDCo loss would more effectively learn discrimina-
tive features [9, 18, 75]. Actually, these two losses are com-
plementary to each other, and their combination can achieve
better performance.
Our contributions are summarized as follows:
• We propose to explicitly transfer class relationship for
SFUDA which is more domain-invariant. And we pro-
pose to embed the class relationship into contrastive
learning in order to effectively perform representation
learning in the target domain.
• We propose a novel class relationship embedded sim-
ilarity which can more accurately express the sample
relationship in the output space. Furthermore, we pro-
pose two contrastive losses ( i.e., CR-CACo and CR-
IDCo) exploiting our designed similarity.
• We conduct extensive experiments to evaluate the pro-
posed method, and the results validate the effective-
ness of our method, which achieves the state-of-the-art
performance on multiple SFUDA benchmarks.
|
Yu_Distribution_Shift_Inversion_for_Out-of-Distribution_Prediction_CVPR_2023 | Abstract
The machine learning community has witnessed the emergence
of a myriad of Out-of-Distribution (OoD) algorithms, which
address the distribution shift between the training and the
testing distribution by searching for a unified predictor or
invariant feature representation. However, the task of di-
rectly mitigating the distribution shift in the unseen testing
set is rarely investigated, due to the unavailability of the
testing distribution during the training phase and thus the
impossibility of training a distribution translator mapping
between the training and testing distribution. In this pa-
per, we explore how to bypass the requirement of testing
distribution for distribution translator training and make
the distribution translation useful for OoD prediction. We
propose a portable Distribution Shift Inversion (DSI) algo-
rithm, in which, before being fed into the prediction model,
the OoD testing samples are first linearly combined with ad-
ditional Gaussian noise and then transferred back towards
the training distribution using a diffusion model trained only
on the source distribution. Theoretical analysis reveals the
feasibility of our method. Experimental results, on both
multiple-domain generalization datasets and single-domain
generalization datasets, show that our method provides a
general performance gain when plugged into a wide range
of commonly used OoD algorithms. Our code is available at
https://github.com/yu-rp/Distribution-Shift-Iverson.
| 1. Introduction
The ubiquity of the distribution shift between the training
and testing data in the real-world application of machine
†Corresponding author.
learning systems induces the study of Out-of-Distribution
(OoD) generalization (or domain generalization). [18, 64,
71, 79] Within the scope of OoD generalization, machine
learning algorithms are required to generalize from the seen
training domain to the unseen testing domain without the
independent and identically distributed assumption. The
bulk of the OoD algorithms in previous literature focuses
on promoting the generalization capability of the machine
learning models themselves by utilizing the domain invariant
feature [2, 13, 36], context-based data augmentation [47,
74], distributionally robust optimization [59], subnetwork
searching [81], neural network calibration [70], etc.
In this work, orthogonal to enhancing the generalization
capability of the model, we consider a novel pathway to
OoD prediction. On the way, the testing(target) distribution
is explicitly transformed towards the training(source) dis-
tribution to straightforwardly mitigate the distribution shift
between the testing and the training distribution. Therein,
the OoD prediction can be regarded as a two-step procedure,
(1) transferring testing samples back towards training dis-
tribution, and (2) drawing prediction. The second step can
be implemented by any OoD prediction algorithm. In this
paper, we concentrate on the exploration of the first step, the
distribution transformation.
Unlike previous works on distribution translation and do-
main transformation, in which certain target distribution is
accessible during the training phase, here the target distri-
bution is arbitrary and unavailable during the training. We
term this new task as Unseen Distribution Transformation
(UDT), in which a domain translator is trained on the source
distribution and works to transform unseen target distribu-
tion towards the source distribution. The uniqueness, as well
as the superiority, of UDT are listed as follows.
Figure 2. Using a Diffusion Model to solve the one-dimensional
UDT. OoD samples are transformed to the source distribution with
limited failure of label preservation.
• Unlike previous works, UDT puts no requirement on having access to data from both the source and the testing distributions during training. This is practically valuable, because the real-world testing distributions are uncountable and dynamically changing.
•UDT is able to transform various distributions using
only one model. However, the previous distribution
translator works for the translation between certain
source and target distributions. With a different source-
target pair, a new translator is required.
•Considering the application of UDT in OoD prediction,
it is free from the extra assumptions commonly used by
the OoD generalization algorithms, such as the multi-
training domain assumption and the various forms of
the domain invariant assumption.
Despite the advantages, the unavailability of the testing
distribution poses new difficulty. Releasing this constraint,
the idea of distribution alignment is well established in do-
main adaptation (DA). Wherein, a distribution translator is
trained with the (pixel, feature, and semantic level) cycle
consistency loss [23, 38, 82]. However, the training of such
distribution transfer modules necessitates the testing distribu-
tion, which is unsuitable under the setting of OoD prediction
and makes the transplant of the methods in DA to OoD even
impossible.
To circumvent the requirement of testing distribution dur-
ing training time, we propose a novel method, named Distri-
bution Shift Inversion (DSI). Instead of using a model trans-
ferring from testing distribution to training distribution, an
unconditional generative model, trained only on the source
distribution, is used, which transfers data from a reference
noise distribution to the source distribution. The method
operates in two successive parts. First, the OoD target dis-
tribution is transferred to the neighborhood of the noise dis-
tribution and aligned with the input of the generative model,
thereafter we refer to this process as the forward transforma-
tion. The crux of this step is designing to what degree the
target distribution is aligned to the noise distribution. In our
implementation, the forward transformation is conducted by
linearly combining the OoD samples and random noise, with the weights controlling the alignment. Then, in the second
step, the outcome of the first step is transferred towards the
source distribution by a generative model, thereafter we refer
to this process as the backward transformation. In this paper,
the generative model is chosen to be the diffusion model [21].
The superiority of the diffusion model is that its input is the
linear combination of the source sample and the noise with
varying magnitude, which is in accord with our design of the
forward transformation and naturally allows strength control.
Comparatively, VAE [28] and GAN [16] have a fixed level
of noise in their input, which makes the forward transfor-
mation strength control indirect. Our theoretical analyses
of the diffusion model also show the feasibility of using the
diffusion model for UDT.
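To make the two-step procedure concrete, here is a minimal, illustrative sketch (not the authors' released code): the OoD batch is linearly combined with Gaussian noise using the DDPM schedule at an alignment step, then denoised back by a noise-prediction network trained only on the source distribution. The eps_model handle and the schedule tensor are assumed to come from a standard DDPM setup, and the deterministic reverse step is a simplification:

import torch

@torch.no_grad()
def distribution_shift_inversion(x_ood, eps_model, alphas_cumprod, t_align):
    """x_ood: (B, C, H, W) OoD test images; eps_model(x_t, t): hypothetical
    noise predictor trained ONLY on the source distribution; alphas_cumprod:
    1-D DDPM cumulative-alpha tensor (index 0 assumed to mean "no noise");
    t_align: alignment strength, larger values push x_ood closer to noise."""
    # Forward transformation: mix the OoD sample with Gaussian noise using
    # the DDPM schedule at step t_align as the mixing weights.
    a_bar = alphas_cumprod[t_align]
    x_t = a_bar.sqrt() * x_ood + (1.0 - a_bar).sqrt() * torch.randn_like(x_ood)

    # Backward transformation: run the standard DDPM reverse process from
    # t_align down to 0, pulling the sample towards the source distribution.
    for t in range(t_align, 0, -1):
        a_bar_t, a_bar_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        alpha_t = a_bar_t / a_bar_prev
        eps = eps_model(x_t, torch.full((x_t.shape[0],), t, device=x_t.device))
        # DDPM posterior mean (the stochastic sigma term is omitted here).
        x_t = (x_t - (1.0 - alpha_t) / (1.0 - a_bar_t).sqrt() * eps) / alpha_t.sqrt()
    return x_t   # feed this into any downstream OoD predictor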
Illustrative Example. A one-dimensional example is
shown in Fig. 2. The example considers a binary classifica-
tion problem in which, given the label, the conditional distribu-
tions of the samples are Gaussian in both the source and the
testing domain. The testing distribution is constructed to be
OoD and located in the region where the source distribution
has a low density. The diffusion model is trained only on the
source distribution. Passing through the noise space align-
ment and diffusion model transformation, the OoD samples
are transformed to the source distribution with limited failure
of label preservation.
Transformed Images. Fig. 1 shows some transformation
results of OoD images towards the source distribution. The
observation is twofold. (1) The distribution (here is the
style) of the images is successfully transformed. All of the
transferred images can be correctly classified by the ERM
model trained on the source domain. (2) The transformed
images are correlated to the original images. Some structural
and color characteristics are mutually shared between them.
This indicates that the diffusion model has extracted some
low-level information and is capable to preserve it during
the transformation. We would like to highlight again that,
during the training, the diffusion model is isolated from the
testing domain.
Our contributions are therefore summarized as:
•We put forward the unseen distribution transformation
(UDT) and study its application to OoD prediction.
•We offer theoretical analyses of the feasibility of UDT.
•We propose DSI, a sample-adaptive distribution trans-
formation algorithm for efficient distribution adaptation
and semantic information preservation.
•We perform extensive experiments to demonstrate that
our method is suitable for various OoD algorithms to
achieve performance gain on diversified OoD bench-
marks. On average, adding in our method produces
2.26% accuracy gain on multi-training domain general-
ization datasets and 2.28% on single-training domain
generalization datasets.
|
Zhang_Multi-View_Stereo_Representation_Revist_Region-Aware_MVSNet_CVPR_2023 | Abstract
Deep learning-based multi-view stereo has emerged
as a powerful paradigm for reconstructing the complete
geometrically-detailed objects from multi-views. Most of
the existing approaches only estimate the pixel-wise depth
value by minimizing the gap between the predicted point
and the intersection of ray and surface, which usually ig-
nore the surface topology. It is essential to the textureless
regions and surface boundary that cannot be properly re-
constructed. To address this issue, we suggest to take ad-
vantage of point-to-surface distance so that the model is
able to perceive a wider range of surfaces. To this end,
we predict the distance volume from cost volume to esti-
mate the signed distance of points around the surface. Our
proposed RA-MVSNet is patch-aware, since the percep-
tion range is enhanced by associating hypothetical planes
with a patch of surface. Therefore, it could increase the
completion of textureless regions and reduce the outliers at
the boundary. Moreover, the mesh topologies with fine de-
tails can be generated by the introduced distance volume.
Comparing to the conventional deep learning-based multi-
view stereo methods, our proposed RA-MVSNet approach
obtains more complete reconstruction results by taking ad-
vantage of signed distance supervision. The experiments on
both the DTU and Tanks & Temples datasets demonstrate
that our proposed approach achieves the state-of-the-art re-
sults.
| 1. Introduction
Multi-view stereo (MVS) is able to efficiently recover
geometry from multiple images, which makes use of the
matching relationship and stereo correspondences of over-
lapping images.
To achieve the promising reconstruction results, the con-
ventional patch-based and PatchMatch-based methods [2,
11, 27] require rich textures and restricted lighting condi-
*Corresponding author is Jianke Zhu.
Hypothetical plane point
Baseline
RA-MVSNetAssociated surface point
Figure 1. Comparison on reconstruction results between base-
line and RA-MVSNet. Our RA-MVSNet enables the model to
perceive a wider range of surfaces so as to achieve the promising
performance in complementing textureless regions and removing
outliers at boundaries. Furthermore, our model is able to gener-
ate correct mesh topologies with fine details based on estimated
point-to-surface distances of spatially sampled points.
tions. Alternatively, the deep learning-based approaches [4,
14,15,40] try to take advantage of global scene semantic in-
formation, including environmental illumination and object
materials, to maintain high performance in complex light-
ing. The key of these methods is to warp deep image fea-
tures into the reference camera frustum so that the 3D cost
volume can be built via differentiable homographies. Then,
the depth map is predicted by regularizing cost volume with
3D CNNs.
Despite the encouraging results, the pixel-wise depth es-
timation suffers from two intractable flaws. One is the low
estimation confidence in the textureless area. The other
is many outliers near the boundary of the object. This is
mainly because the surface is usually treated as a set of un-
correlated sample points rather than the one with topology.
As each ray is only associated with a single surface sam-
pling point, it is impossible to pay attention to the adjacent
area of the surface. As shown in Fig. 1, the estimation of
each depth value is constrained by only one surface sample
point, which makes it unable to use the surrounding surface
for inference. Unfortunately, it is difficult to infer without
broader surface information in textureless regions and ob-
ject boundaries. Therefore, too small perception range lim-
its the existing learning-based MVS methods.
To tackle this issue, we present a novel RA-MVSNet
framework that is able to make each hypothetical plane as-
sociated with a wider surface area through point-to-surface
distance. Thus, our presented method is capable of inferring
the surrounding surface information at textureless areas and
object boundaries. To this end, our network not only es-
timates the probability volume but also predicts the point-
to-surface distance of each hypothetical plane. Specifically,
RA-MVSNet makes use of the cost volume to generate the
probability and distance volumes, which are further com-
bined to estimate the final depth map. The introduction of
point-to-surface distance supervision uses the model patch-
aware in estimating the depth value corresponding to a par-
ticular pixel. This leads to the improved performance in
textureless or boundary areas. Since the distance volume
estimates the length of the sample points near the surface,
we are able to predict a SDF-based implicit representation
with the correct topology and fine details.
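As a hedged illustration of how such point-to-surface supervision could be obtained offline (not the authors' pipeline; the truncation value is a placeholder), one can triangulate the ground-truth points into a mesh and query signed distances for the sampled hypothesis points, e.g. with trimesh:

import numpy as np
import trimesh

def sdf_supervision(mesh_vertices, mesh_faces, sample_points, truncation=0.01):
    """Signed point-to-surface distances for sampled hypothesis points.

    mesh_vertices/mesh_faces: a triangulated ground-truth surface.
    sample_points:            (N, 3) points sampled along depth hypotheses.
    truncation:               clip distances far from the surface (placeholder value).
    """
    mesh = trimesh.Trimesh(vertices=mesh_vertices, faces=mesh_faces, process=False)
    # trimesh convention: positive inside the mesh, negative outside.
    sdf = trimesh.proximity.signed_distance(mesh, sample_points)
    return np.clip(sdf, -truncation, truncation)

The clipped signed distances can then serve as regression targets for the distance volume.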
In summary, our contribution is three-fold:
• We introduce point-to-surface distance supervision of
sampled points to expand the perception range pre-
dicted by the model, which achieves complete estima-
tion in textureless areas and reduces outliers in object
boundary regions.
• To tackle the challenge of lacking the ground-truth
mesh, we compute the signed distance between point
sets based on the triangulated mesh, which trades off
between accuracy and speed.
• Experimental results on the challenging MVS datasets
show that our proposed approach performs the best
both on indoor dataset DTU [1] and large-scale out-
door dataset Tanks and Temples [17].
|
Yan_Linking_Garment_With_Person_via_Semantically_Associated_Landmarks_for_Virtual_CVPR_2023 | Abstract
In this paper, a novel virtual try-on algorithm, dubbed
SAL-VTON, is proposed, which links the garment with the
person via semantically associated landmarks to alleviate
misalignment. The semantically associated landmarks are
a series of landmark pairs with the same local seman-
tics on the in-shop garment image and the try-on image.
Based on the semantically associated landmarks, SAL-
VTON effectively models the local semantic association
between garment and person, making up for the misalign-
ment in the overall deformation of the garment. The
outcome is achieved with a three-stage framework: 1) the
semantically associated landmarks are estimated using the
landmark localization model; 2) taking the landmarks as
input, the warping model explicitly associates the corre-
sponding parts of the garment and person for obtaining
the local flow, thus refining the alignment in the global
flow; 3) finally, a generator consumes the landmarks
to better capture local semantics and control the try-on
results. Moreover, we propose a new landmark dataset
with a unified labelling rule of landmarks for diverse
styles of garments. Extensive experimental results on
*Co-first authors contributed equally, †Corresponding author.
popular datasets demonstrate that SAL-VTON can handle
misalignment and outperform state-of-the-art methods both
qualitatively and quantitatively. The dataset is available on
https://modelscope.cn/datasets/damo/SAL-HG/summary.
| 1. Introduction
In recent years, with the rapid popularization of online
shopping, virtual try-on [6, 9, 16, 32, 52] has attracted
extensive attention for its potential applications. Image-
based virtual try-on [4, 34, 49] aims to synthesize a photo-
realistic try-on image by transferring a garment image onto
the corresponding region of a person. Commonly, there
are significant spatial geometric gaps between the in-shop
garment image and the person image, leading to garments
failing to align the corresponding body parts of person.
To address the above issue, prior arts take geometric
deformation models to align the garment with the person’s
body. Early works [16, 22, 43] widely use the Thin-
Plate Spline (TPS) deformation model [39], whereas the
smoothness constraint of TPS transformation limits the
warping capacity. Recently, the flow operation is applied,
with a high dimension of freedom to warp garments [5, 11,
15, 18]. Nonetheless, the flow operation falls short on gar-
ment regions with large deformation. The aforementioned
methods focus on modeling the overall deformation of the
garment, but ignore the local semantic association between
garment and person. Therefore, when there are large local
deformations of garments, the try-on results usually occur
misalignment, such as missing or mixing garments (see left
part of Fig. 1). To address the local misalignment problem,
Xie et al. [46] introduce the patch-routed disentanglement
module to splice different parts of the garment. However,
this method may result in significant blank spaces between
spliced parts of the garment.
Fortunately, the landmarks in the garment image and the
person image naturally have local semantic associations. As
can be observed from Fig. 2, the pixels around landmark A
on the try-on result should come from the landmark A′area
on the garment. Such a pair of landmarks with the same
local semantics are referred to as semantically associated
landmarks. Based on this observation, this paper presents a
novel virtual try-on algorithm named SAL-VTON, which
links the garment with the person via semantically as-
sociated landmarks to help align the garment with the
person. Notably, the proposed approach varies differently
from the previous landmark-guided try-on methods [28,37].
LM-VTON [28] and LG-VTON [37] utilize landmarks to
supervise the TPS transformation. However, the potential
of local semantic association has not been fully explored,
and the limited degrees of freedom of TPS transformation
further hinder performance improvements. SAL-VTON,
for the first time, introduces the local flow estimated via
semantically associated landmarks to effectively model the
local semantic association. In addition, a generator with
Landmark-Aware Semantic Normalization Layer (LASNL)
is carried out to better capture local semantics.
Specifically, the proposed SAL-VTON consists of three
stages. Firstly, the semantically associated landmarks are
estimated using the landmark localization model. Sub-
sequently, the semantically associated landmarks are em-
ployed as a new representation for virtual try-on, and
fed into the warping model. Based on the semantically
associated landmarks and learnable deformable patches,
the warping model explicitly associates the corresponding
parts of the garment and person to obtain the local flow,
which contributes significantly to refine the poor alignment
in the global flow. Finally, conditioned on the landmarks,
the LASNL generator can achieve improved alignment in
virtual try-on images. The estimated landmarks on the try-
on result assist the generator in determining if a specific
region needs to generate corresponding garment parts. In
this way, SAL-VTON can effectively model the local
semantic association between the garment and the person,
making up for the misalignment in the overall deformation
of the garment. Moreover, the try-on results of SAL-VTON
can be precisely controlled by manually manipulating the landmarks (see right part of Fig. 1).
Figure 2. An example for the semantically associated landmarks on the in-shop garment image and the try-on image (panels: person, garment, and try-on result).
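For intuition, the following generic sketch (our simplification with Gaussian RBF weights; it is not the paper's warping model, which is learned) shows how matched landmark pairs could seed a dense local flow between the garment and person images:

import numpy as np

def landmark_flow(src_pts, dst_pts, height, width, sigma=30.0):
    """Interpolate sparse landmark displacements into a dense flow field.

    src_pts, dst_pts: (K, 2) matched landmark coordinates (x, y) on the
                      garment image and the person image, respectively.
    Returns a (H, W, 2) flow mapping person-image pixels back to the garment.
    """
    disp = src_pts - dst_pts                              # per-landmark displacement
    ys, xs = np.mgrid[0:height, 0:width]
    grid = np.stack([xs, ys], axis=-1).reshape(-1, 1, 2)  # (H*W, 1, 2)
    d2 = np.sum((grid - dst_pts[None]) ** 2, axis=-1)     # squared distance to each landmark
    w = np.exp(-d2 / (2.0 * sigma ** 2))                  # Gaussian RBF weights, (H*W, K)
    w = w / (w.sum(axis=1, keepdims=True) + 1e-8)
    flow = w @ disp                                       # weighted average displacement
    return flow.reshape(height, width, 2)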
To this end, we re-annotate images on the popular virtual
try-on benchmarks including VITON [16] and VITON-HD
[4] datasets. Existing popular clothing landmark datasets
[12, 54] adopt different landmark definitions for different
categories of garments. In contrast to other datasets, we
adopt a unified labelling rule of landmarks for diverse styles
of garments, including both standard and non-standard va-
rieties. In the proposed dataset1, every image is annotated
with 32 landmarks, each of which possesses three kinds of
attributes: visible, occluded and absent. The landmarks
with the same serial number have the same semantics,
which enhances the universality of the dataset.
This work makes the following main contributions : (1)
A novel virtual try-on algorithm, SAL-VTON, is proposed,
which links the garment with the person via semantically
associated landmarks. SAL-VTON, for the first time, intro-
duces the local flow that can alleviate the misalignment and
the LASNL generator for virtual try-on. (2) A new land-
mark dataset is proposed, providing a new representation
for virtual try-on, with a unified labelling rule of landmarks
for diverse styles of garments. (3) Extensive experiments
over two popular datasets demonstrate that SAL-VTON is
capable of handling misalignment and significantly out-
performs other state-of-the-art methods. Furthermore, the
extended experiments show that the virtual try-on results
can be edited via the landmarks.
|
Zhang_Weakly_Supervised_Video_Emotion_Detection_and_Prediction_via_Cross-Modal_Temporal_CVPR_2023 | Abstract
Automatically predicting the emotions of user-generated
videos (UGVs) receives increasing interest recently. How-
ever, existing methods mainly focus on a few key visual
frames, which may limit their capacity to encode the con-
text that depicts the intended emotions. To tackle that,
in this paper, we propose a cross-modal temporal eras-
ing network that locates not only keyframes but also con-
text and audio-related information in a weakly-supervised
manner. Specifically, we first leverage the intra- and
inter-modal relationship among different segments to ac-
curately select keyframes. Then, we iteratively erase
keyframes to encourage the model to concentrate on the
contexts that include complementary information. Exten-
sive experiments on three challenging video emotion bench-
marks demonstrate that our method performs favorably
against state-of-the-art approaches. The code is released
on https://github.com/nku-zhichengzhang/WECL.
| 1. Introduction
Emotion analysis in user-generated videos (UGVs) has
attracted much attention since a growing number of people
tend to express their views on social networks [20, 32, 36].
Automatic predictions of video emotions [52, 54] can po-
tentially be applied in various areas like online content fil-
tering [1], attitude recognition [32], and customer behavior
analysis [39]. Emotions evoked in UGVs usually depend on
multiple perspectives, such as actions, events, and objects,
where different frames in a UGV may contribute unequally
to conveying emotions.
Existing methods in this field mainly focus on extracting
keyframes from visual content, assuming that these frames
hold the dominant information for the intended emotions
in videos. For example, Tu et al. [12] introduce an attribu-
tion network to locate keyframes with temporal annotations,
which are more precise than video-level labeling and lead
† Corresponding author.
Figure 1. Illustration of the keyframes with larger boxes that are
detected by off-the-shelf method [65] on the Ekman-6 dataset [54].
Note that the deeper color represents the higher impact on overall
video emotion, and “GT” represents the category label from the
ground truth. The texts with orange color are the categorical pre-
dicting results.
to better performance for video emotion recognition. Yet,
annotating the emotional labels frame-by-frame is labor-
sensitive and time-consuming [63]. Zhao et al. [65] fur-
ther present the visual-audio network (VAANet) conduct-
ing three types of attention to automatically discover the
discriminative keyframes, which makes it state-of-the-art.
However, the selected “keyframes” may fail to represent
the intended emotions exactly due to the inherent charac-
teristics of human emotions, i.e., subjectivity and ambigu-
ity [49, 57, 66]. As illustrated in Figure 1 (a), a woman
receives a gift and moves to cry. The video-level emo-
tion category is labeled as ‘surprise’. We could observe
that VAANet [65] gives the most attention to the keyframes
(i.e., “crying” frames in the larger boxes) while ignoring
the context and leading to the wrong prediction. Further-
more, in Figure 1 (b), a man sees his beloved girl talking
with another man happily, which makes him feel sad. How-
ever, VAANet only focuses on frames about chatting and
categorizes the emotion of this video as ‘joy’. Therefore,
keyframes may lead to limited prediction results. Although
the detected keyframes directly convey emotions in most
videos, some other information that contains the necessary
context should not be ignored. This is because the contex-
tual frames could not only provide complementary infor-
mation for understanding emotions in UGVs (especially for
the cases where keyframes are hard to be distinguished),
but also make the model more robust since more cues are
considered to recognize emotions rather than the dominant
information.
To address these problems, we propose a novel cross-
modal temporal erasing network, which enables the model
to be aware of context for recognizing emotions. Our pro-
posed method mainly contains two crucial modules: a temporal correlation learning module and a temporal erasing module. First, we extract the visual feature together with the
audio one from each equal-length segment derived from the
video. Second, the temporal correlation learning module
is introduced to fully discover comprehensive implicit cor-
respondences among different segments across audio and
visual modal. Then, keyframes are selected by consider-
ing the correspondences with other frames in a weakly-
supervised manner, where only the video-level class label is
used. Finally, the temporal erasing module iteratively erases
the most dominant visual and audio information to train
with difficult samples online, thus encouraging the model
to detect more complementary information from context in-
stead of the most dominant ones.
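A schematic sketch of the erasing step (our own simplification; the attention scores, the number of erased segments, and the iteration schedule are design choices of the actual module):

import torch

def erase_keyframes(segment_feats, attn_scores, num_erase=1):
    """Zero out the most dominant segments so the model must rely on context.

    segment_feats: (B, T, D) per-segment visual/audio features.
    attn_scores:   (B, T) importance scores from the correlation module.
    """
    top_idx = attn_scores.topk(num_erase, dim=1).indices   # (B, num_erase)
    mask = torch.ones_like(attn_scores)
    mask.scatter_(1, top_idx, 0.0)                         # erase the dominant segments
    return segment_feats * mask.unsqueeze(-1)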
Our contributions can be summarized as follows: 1)
We introduce a weakly-supervised network to exploit
keyframes and the necessary context in a unified CNN
framework, which encourages the model to extract features
from multiple discriminative parts and learn better represen-
tation for video emotion analysis. 2) We exploit intra- and
inter-modal relationships to provide frame-level localized
information only with video-level annotation, with which
the model consolidates both holistic and local representa-
tions for affective computing. We demonstrate the advan-
tages of the above contributions with extensive experiments.
Our method achieves state-of-the-art on three video emo-
tion datasets.
|
Zhang_CloSET_Modeling_Clothed_Humans_on_Continuous_Surface_With_Explicit_Template_CVPR_2023 | Abstract
Creating animatable avatars from static scans re-
quires the modeling of clothing deformations in different
poses. Existing learning-based methods typically add pose-
dependent deformations upon a minimally-clothed mesh
template or a learned implicit template, which have limita-
tions in capturing details or hinder end-to-end learning. In
this paper, we revisit point-based solutions and propose to
decompose explicit garment-related templates and then add
pose-dependent wrinkles to them. In this way, the clothing
deformations are disentangled such that the pose-dependent
wrinkles can be better learned and applied to unseen poses.
Additionally, to tackle the seam artifact issues in recent
state-of-the-art point-based methods, we propose to learn
point features on a body surface, which establishes a contin-
uous and compact feature space to capture the fine-grained
and pose-dependent clothing geometry. To facilitate the re-
search in this field, we also introduce a high-quality scan
dataset of humans in real-world clothing. Our approach is
validated on two existing datasets and our newly introduced
dataset, showing better clothing deformation results in un-
seen poses. The project page with code and dataset can be
found at https://www.liuyebin.com/closet.
| 1. Introduction
Animating 3D clothed humans requires the modeling of
pose-dependent deformations in various poses. The diver-
sity of clothing styles and body poses makes this task ex-
tremely challenging. Traditional methods are based on ei-
ther simple rigging and skinning [4,20,33] or physics-based
simulation [14, 23, 24, 49], which heavily rely on artist ef-
forts or computational resources. Recent learning-based
methods [10, 36, 39, 59] resort to modeling the clothing de-
formation directly from raw scans of clothed humans. De-
spite the promising progress, this task is still far from be-
ing solved due to the challenges in clothing representations,
generalization to unseen poses, and data acquisition, etc.
Figure 1. Our method learns to decompose garment templates (top
row) and add pose-dependent wrinkles upon them (bottom row).
For the modeling of pose-dependent garment geome-
try, the representation of clothing plays a vital role in a
learning-based scheme. As the relationship between body
poses and clothing deformations is complex, an effec-
tive representation is desirable for neural networks to cap-
ture pose-dependent deformations. In the research of this
line, meshes [2, 12, 38], implicit fields [10, 59], and point
clouds [36, 39] have been adopted to represent clothing. In
accordance with the chosen representation, the clothing de-
formation and geometry features are learned on top of a
fixed-resolution template mesh [8, 38], a 3D implicit sam-
pling space [10,59], or an unfolded UV plane [2,12,36,39].
Among these representations, the mesh is the most efficient
one but is limited to a fixed topology due to its discretization
scheme. The implicit fields naturally enable continuous fea-
ture learning in a resolution-free manner but are too flexible
to satisfy the body structure prior, leading to geometry arti-
facts in unseen poses. The point clouds enjoy the compact
nature and topology flexibility and have shown promising
results in the recent state-of-the-art solutions [36,39] to rep-
resent clothing, but the feature learning on UV planes still
leads to discontinuity artifacts between body parts.
To model the pose-dependent deformation of clothing,
body templates such as SMPL [34] are typically leveraged
to account for articulated motions. However, a body tem-
plate alone is not ideal, since the body template only models
the minimally-clothed humans and may hinder the learning
of actual pose-dependent deformations, especially in cases
of loose clothing. To overcome this issue, recent implicit
approaches [59] make attempts to learn skinning weights in
the 3D space to complement the imperfect body templates.
However, their pose-dependent deformations are typically
coarse due to the difficulty in learning implicit fields. For
explicit solutions, the recent approach [32] suggests learn-
ing coarse templates implicitly at first and then the pose-
dependent deformations explicitly. Despite its effective-
ness, such a workaround requires a two-step modeling pro-
cedure and hinders end-to-end learning.
In this work, we propose CloSET, an end-to-end method
to tackle the above issues by modeling Clothed humans on a
continuous Surface with Explicit Template decomposition.
We follow the spirit of recent state-of-the-art point-based
approaches [32, 36, 39] as they show the efficiency and po-
tential in modeling real-world garments. We take steps for-
ward in the following aspects for better point-based model-
ing of clothed humans. First, we propose to decompose the
clothing deformations into explicit garment templates and
pose-dependent wrinkles. Specifically, our method learns a
garment-related template and adds the pose-dependent dis-
placement upon them, as shown in Fig. 1. Such a garment-
related template preserves a shared topology for various
poses and enables better learning of pose-dependent wrin-
kles. Different from the recent solution [32] that needs
two-step procedures, our method can decompose the ex-
plicit templates in an end-to-end manner with more gar-
ment details. Second, we tackle the seam artifact issues
that occurred in recent point-based methods [36, 39]. In-
stead of using unfolded UV planes, we propose to learn
point features on a body surface, which supports a continu-
ous and compact feature space. We achieve this by learn-
ing hierarchical point-based features on top of the body
surface and then using barycentric interpolation to sample
features continuously. Compared to feature learning in the
UV space [39], on template meshes [8, 38], or in the 3D
implicit space [57, 59], our body surface enables the net-
work to capture not only fine-grained details but also long-
range part correlations for pose-dependent geometry mod-
eling. Third, we introduce a new scan dataset of humans in
real-world clothing, which contains more than 2,000 high-
quality scans of humans in diverse outfits, hoping to facili-
tate the research in this field. The main contributions of this
work are summarized below:
• We propose a point-based clothed human modeling
method by decomposing clothing deformations into
explicit garment templates and pose-dependent wrin-kles in an end-to-end manner. These learnable tem-
plates provide a garment-aware canonical space so that
pose-dependent deformations can be better learned and
applied to unseen poses.
• We propose to learn point-based clothing features on
a continuous body surface, which allows a continu-
ous feature space for fine-grained detail modeling and
helps to capture long-range part correlations for pose-
dependent geometry modeling.
• We introduce a new high-quality scan dataset of
clothed humans in real-world clothing to facilitate the
research of clothed human modeling and animation
from real-world scans.
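As a generic illustration of the continuous surface-feature sampling mentioned above (a sketch under our own assumptions, not CloSET's implementation): given per-vertex features, the face index of a query point, and its barycentric coordinates, the sampled feature is simply the barycentric blend, which varies continuously across the body surface:

import numpy as np

def sample_surface_features(vertex_feats, faces, face_idx, bary):
    """Continuously interpolate per-vertex features at surface points.

    vertex_feats: (V, D) learned features attached to mesh vertices.
    faces:        (F, 3) vertex indices per triangle.
    face_idx:     (N,)   triangle containing each query point.
    bary:         (N, 3) barycentric coordinates of each query point.
    """
    tri_feats = vertex_feats[faces[face_idx]]        # (N, 3, D) corner features
    return np.einsum("nk,nkd->nd", bary, tri_feats)  # barycentric blend, (N, D)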
|
Xu_Multi-View_Adversarial_Discriminator_Mine_the_Non-Causal_Factors_for_Object_Detection_CVPR_2023 | Abstract
Domain shift degrades the performance of object de-
tection models in practical applications. To alleviate the
influence of domain shift, plenty of previous works try to
decouple and learn the domain-invariant (common) fea-
tures from source domains via domain adversarial learn-
ing (DAL). However, inspired by causal mechanisms, we
find that previous methods ignore the implicit insignificant
non-causal factors hidden in the common features. This is
mainly due to the single-view nature of DAL. In this work,
we present an idea to remove non-causal factors from com-
mon features by multi-view adversarial training on source
domains, because we observe that such insignificant non-
causal factors may still be significant in other latent spaces
(views) due to the multi-mode structure of data. To sum-
marize, we propose a Multi-view Adversarial Discriminator
(MAD) based domain generalization model, consisting of a
Spurious Correlations Generator (SCG) that increases the
diversity of source domain by random augmentation and a
Multi-View Domain Classifier (MVDC) that maps features
to multiple latent spaces, such that the non-causal factors
are removed and the domain-invariant features are purified.
Extensive experiments on six benchmarks show our MAD
obtains state-of-the-art performance.
| 1. Introduction
The problem of how to adapt object detectors to un-
known target domains in real world has drawn increasing
attention. Traditional object detection methods [11, 12, 25,
29,30] are based on independent and identically distributed
(i.i.d.) hypothesis, which assume that the training and test-
ing datasets have the same distribution. However, the target
distribution can hardly be estimated in real world and dif-
fers from the source domains, which is coined as domain
*Corresponding author (Lei Zhang)
Figure 1. Illustration of the biased learning of conventional DAL (a feature extractor with a domain classifier trained on sunny vs. foggy images; a ResNet block is later added to the domain classifier). The domain classifier easily encounters early stop and fails.
shift [38]. And the performance of object detection models
will sharply drop when facing the domain shift problem.
Domain adaptation (DA) [3, 6, 17, 34, 40, 44, 52] is pro-
posed to deal with the domain shift problem, which enables
the model to be adapted to the target distribution by aligning
features extracted from the source and unlabeled target do-
mains. However, the requirement of target domain datasets
still limits the applicability of DA methods in reality. Do-
main generalization (DG) [49] goes one step further, aiming
to train a model from single or multiple source domains that
can generalize to unknown target domains.
Although lots of DG methods have been proposed in the
image classification field, there are still some unresolved
problems. In our opinion, the common features extracted by
previous DG methods are still not pure enough. The main
reason is that through a single-view domain discriminator in
DAL, only the significant domain style information can be
removed, while some implicit and insignificant non-causal
factors in source domains may be absorbed by the feature
extractor as a part of common features. This has never been
noticed. This implies the multi-mode structure of data and
single-view domain discriminator cannot fully interpret the
data. There is a piece of evidence to support our claim.
To confirm our suspicions on the domain discriminator,
we designed a validation experiment. As is shown in Fig. 1,
Figure 2. Relationships among causal factors, non-causal factors, domain-specific features and domain-common features. Causal factors are domain-common, while factors such as light color, background, resolution, illumination, camera angle, and size are domain-specific or domain-common but non-causal, constituting significant or potential domain differences between, e.g., sunny and foggy scenes.
we use DANN model [9] with DAL strategy to train a com-
mon feature extractor. When domain classifier converges,
we freeze feature extractor and re-train domain classifier
with a newly added residual block [14]. We observe an
interesting phenomenon: when re-trained with the newly
added residual block, the domain classifier loss continues
to decline. That is, some domain-specific information still
exists. This phenomenon confirms our claim that in existing
DG, DAL cannot explore and remove all domain specific
features. This is because domain classifier only observes
significant domain-specific feature in a single-view, while
insignificant domain specific features in one view (space)
can be significant in other views (latent spaces).
Based on the former experiment, we propose that min-
ing common features through DAL in single-view on a lim-
ited number of domains is insufficient. By using traditional
DAL, only the primary style information w.r.t. domain la-
bels can be removed. Here we analyse this problem from
the perspective of causality. As shown in Fig. 2, in a lim-
ited number of domains, the common features still contain
non-causal factors such as light color, illumination, back-
ground, etc., which is expressed as the orange arrows in the
figure. And such insignificant non-causal factors observed
from one view may still be significant uninformative fea-
tures in other latent spaces (views). So a natural idea is
to explore and remove the implicit non-causal information
from multiple views and purify the common features for
generalizing to unseen domains.
In order to remove the potential non-causal information,
we rethink the domain discriminator in DAL and propose
a multi-view adversarial domain discriminator (MAD) that
can observe the implicit insignificant non-causal factors. In
our life, in order to get the whole architecture of an object,
we often need to observe it from multiple views/profiles. A
toy example is shown in Fig. 3 (left part). When we observe
the Penrose triangle from one specific view, we might mis-
classify it as a triangle, ignoring that it might also appear
to be an “L” from another perspective. Following this intuition,
we construct a Multi-View Domain Classifier (MVDC) that
can discriminate features in multiple views. Specifically,
Figure 3. An illustration of the multi-view idea and effect of MAD. Left: a toy example. Right: attention heatmaps of different views (views 1–3) on photo, art, cartoon, and sketch images.
we simulate multi-view observations by mapping the fea-
tures to different latent spaces with auto-encoders [16], and
discriminate these transformed features via multi-view do-
main classifiers. By mining and removing as many non-
causal factors as possible, MVDC encourages the feature
extractor to learn more domain-invariant but causal factors.
We conduct an experiment based on MVDC and show the
heatmaps from different views in Fig. 3 (right part), which
verifies our idea that different noncausal factors can be un-
veiled in different views.
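A compact, hypothetical sketch of this multi-view construction (ours, not the released MAD code): each view is a small auto-encoder that maps the shared feature into its own latent space, and a per-view domain classifier is trained adversarially through a gradient reversal layer:

import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad                      # reverse gradients for adversarial training

class MultiViewDomainClassifier(nn.Module):
    def __init__(self, feat_dim, latent_dim, num_domains, num_views=3):
        super().__init__()
        self.encoders = nn.ModuleList(
            [nn.Linear(feat_dim, latent_dim) for _ in range(num_views)])
        self.decoders = nn.ModuleList(
            [nn.Linear(latent_dim, feat_dim) for _ in range(num_views)])
        self.classifiers = nn.ModuleList(
            [nn.Linear(latent_dim, num_domains) for _ in range(num_views)])

    def forward(self, feat):
        domain_logits, recon_loss = [], 0.0
        for enc, dec, cls in zip(self.encoders, self.decoders, self.classifiers):
            z = enc(feat)                                    # one latent "view" of the feature
            recon_loss = recon_loss + ((dec(z) - feat) ** 2).mean()
            domain_logits.append(cls(GradReverse.apply(z)))  # adversarial domain prediction
        return domain_logits, recon_loss

The reversed gradients flow back through each latent view into the shared feature extractor, pushing it toward features that none of the views can use to tell domains apart.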
Although the Multi-View Domain Classifier can remove
the implicit non-causal features in principle, it still implies
a sufficient diversity of source domains during training. So
we further design a Spurious Correlation Generator (SCG)
to increase the diversity of source domains. Our SCG gen-
erates non-causal spurious connections by randomly trans-
forming the low-frequency and extremely high-frequency
components, as [19] points out that in the spectrum of im-
ages, the extremely high and low frequency parts contain
the majority of domain-specific components.
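For intuition, here is a small, hypothetical frequency-domain augmentation in the spirit of SCG (the band radii and scaling range are our placeholders, not values from the paper): the amplitude of the extremely low and high frequency bands is randomly rescaled while the phase is kept:

import numpy as np

def scg_augment(img, low_radius=0.05, high_radius=0.45, scale_range=(0.5, 1.5)):
    """Randomly perturb the extremely low/high frequency amplitude of an image.

    img: float array of shape (H, W) or (H, W, C), values in [0, 1].
    """
    spec = np.fft.fftshift(np.fft.fft2(img, axes=(0, 1)), axes=(0, 1))
    amp, phase = np.abs(spec), np.angle(spec)

    h, w = img.shape[:2]
    yy, xx = np.ogrid[:h, :w]
    r = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / np.sqrt((h / 2) ** 2 + (w / 2) ** 2)
    band = (r < low_radius) | (r > high_radius)            # extreme low + high frequencies
    if img.ndim == 3:
        band = band[..., None]

    amp = np.where(band, amp * np.random.uniform(*scale_range), amp)
    aug = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase), axes=(0, 1)), axes=(0, 1))
    return np.clip(aug.real, 0.0, 1.0)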
Combining MVDC and SCG, the Multi-view Adversar-
ial Discriminator (MAD) is formed. Cross-domain ex-
periments on six standard datasets show that our MAD achieves
state-of-the-art performance compared to other mainstream
DGOD methods. The contributions are three-fold:
1. We point out that existing DGOD work focuses on ex-
tracting common features but fails to mine and remove the
potential spurious correlations from a causal perspective.
2. We propose a Multi-view Adversarial Discriminator
(MAD) to eliminate implicit non-causal factors by discrim-
inating non-causal factors from multiple views and extract-
ing domain-invariant but causal features.
3. We test and analyze our method on standard datasets,
verifying its effectiveness and superiority.
|
Zhang_Nerflets_Local_Radiance_Fields_for_Efficient_Structure-Aware_3D_Scene_Representation_CVPR_2023 | Abstract
We address efficient and structure-aware 3D scene rep-
resentation from images. Nerflets are our key contribution:
a set of local neural radiance fields that together represent
a scene. Each nerflet maintains its own spatial position, ori-
entation, and extent, within which it contributes to panop-
tic, density, and radiance reconstructions. By leveraging
only photometric and inferred panoptic image supervision,
we can directly and jointly optimize the parameters of a set
of nerflets so as to form a decomposed representation of the
scene, where each object instance is represented by a group
of nerflets. During experiments with indoor and outdoor en-
vironments, we find that nerflets: (1) fit and approximate the
scene more efficiently than traditional global NeRFs, (2) al-
low the extraction of panoptic and photometric renderings
from arbitrary views, and (3) enable tasks rare for NeRFs,
such as 3D panoptic segmentation and interactive editing.
Our project page.
| 1. Introduction
This paper aims to produce a compact, efficient, and
comprehensive 3D scene representation from only 2D im-
ages. Ideally, the representation should reconstruct appear-ances, infer semantics, and separate object instances, so that
it can be used in a variety of computer vision and robotics
tasks, including 2D and 3D panoptic segmentation, interac-
tive scene editing, and novel view synthesis.
Many previous approaches have attempted to generate
rich 3D scene representations from images. PanopticFu-
sion [34] produces 3D panoptic labels from images, though
it requires input depth measurements from specialized sen-
sors. NeRF [32] and its descendants [3, 4,33,39] produce
3D density and radiance fields that are useful for novel
view synthesis, surface reconstruction, semantic segmenta-
tion [50, 60], and panoptic segmentation [5, 21]. However,
existing approaches require 3D ground truth supervision,
are inefficient, or do not handle object instances.
We propose nerflets, a 3D scene representation with mul-
tiple local neural fields that are optimized jointly to describe
the appearance, density, semantics, and object instances in a
scene (Figure 1). Nerflets constitute a structured and irreg-
ular representation: each is parameterized by a 3D center, a
3D XYZ rotation, and 3 (per-axis) radii in a 9-DOF coordi-
nate frame. The influence of every nerflet is modulated by a
radial basis function (RBF) which falls off with increasing
distance from the nerflet center according to its orientation
and radii, ensuring that each nerflet contributes to a local
part of the scene. Within that region of influence, each ner-
flet has a miniature MLP to estimate density and radiance. It
also stores one semantic logit vector describing the category
(e.g., “car”) of the nerflet, and one instance label indicating
which real-world object it belongs to (e.g., “the third car”).
In Figure 1, each ellipsoid is a single nerflet, and they are
colored according to their semantics.
A scene can contain any number of nerflets, they may
be placed anywhere in space, and they may overlap, which
provides the flexibility to model complex, sparse 3D scenes
efficiently. Since multiple nerflets can have the same in-
stance label, they can combine to represent the density and
radiance distributions of complex object instances. Con-
versely, since each nerflet has only one instance label, the
nerflets provide a complete decomposition of the scene into
real-world objects. Nerflets therefore provide a 3D panoptic
decomposition of a scene that can be rendered and edited.
Synthesizing images using nerflets proceeds with
density-based volume rendering just as in NeRF [32]. How-
ever, instead of evaluating one large MLP at each point sam-
ple along a ray, we evaluate the small MLPs of only the ner-
flets near a sample. We average the results, weighting by
the influence each nerflet has over that sample. The ren-
dering is fully-differentiable with respect to all continuous
parameters of the nerflets. Fitting the nerflet representation
is performed from a set of posed RGB images with a single
training stage. After training, instance labels are assigned
based on the scene structure, completing the representation.
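The following is a simplified PyTorch sketch of that weighted evaluation: each nerflet's influence at a sample is an anisotropic Gaussian RBF of the sample position expressed in the nerflet's 9-DOF frame, and the outputs of the per-nerflet MLPs are blended with the normalized influences. Shapes, the Gaussian form, and the dense loop over all nerflets are assumptions made for clarity; the actual model also conditions on view direction and only evaluates nearby nerflets.

```python
# Blend tiny per-nerflet MLPs at ray samples with RBF influence weights.
import torch

def nerflet_influence(x, centers, rotations, radii):
    """x: (N, 3) samples; centers: (K, 3); rotations: (K, 3, 3); radii: (K, 3)."""
    local = torch.einsum('kij,nkj->nki', rotations, x[:, None, :] - centers[None])
    # Anisotropic Gaussian RBF: falls off with distance scaled by per-axis radii.
    return torch.exp(-0.5 * ((local / radii[None]) ** 2).sum(-1))       # (N, K)

def blend_nerflets(x, centers, rotations, radii, mlps):
    """mlps: list of K callables mapping (N, 3) -> (N, 4) = (density, r, g, b)."""
    w = nerflet_influence(x, centers, rotations, radii)                 # (N, K)
    w = w / (w.sum(-1, keepdim=True) + 1e-8)
    density, rgb = 0.0, 0.0
    for k, mlp in enumerate(mlps):
        out = mlp(x)
        density = density + w[:, k] * out[:, 0]
        rgb = rgb + w[:, k:k + 1] * torch.sigmoid(out[:, 1:4])
    return density, rgb
```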
Experiments with indoor and outdoor datasets confirm
the main benefits of nerflets. We find that: 1) Parsimony en-
courages the optimizer to decompose the scene into nerflets
with consistent projections into novel panoptic images (Sec-
tion 4.1); 2) Semantic supervision can be beneficial to novel
view synthesis (Section 4.2); 3) Structure encourages effi-
ciency, compactness, and scalability (Section 3.4); and 4)
the explicit decomposition of a scene improves human inter-
pretability for easy interactive editing, including adding and
removing objects (Section 4.1). These benefits enable state-
of-the-art performance on the KITTI360 [24] novel seman-
tic view synthesis benchmark, competitive performance on
ScanNet 3D panoptic segmentation tasks with more limited
supervision, and an interactive 3D editing tool that lever-
ages the efficiency and 3D decomposition of nerflets.
The following summarizes our main contributions:
• We propose a novel 3D scene representation made of
small, posed, local neural fields named nerflets.
• The pose, shape, panoptic, and appearance information
of nerflets are all fit jointly in a single training stage, re-
sulting in a comprehensive learned 3D decomposition
from real RGB images of indoor or outdoor scenes.
• We test nerflets on four tasks: novel view synthesis,
panoptic view synthesis, 3D panoptic segmentation and reconstruction, and interactive editing.
• We achieve 1st place on the KITTI-360 semantic novel
view synthesis leaderboard.
|
Yang_Complementary_Intrinsics_From_Neural_Radiance_Fields_and_CNNs_for_Outdoor_CVPR_2023 | Abstract
Relighting an outdoor scene is challenging due to the
diverse illuminations and salient cast shadows. Intrinsic
image decomposition on outdoor photo collections could
partly solve this problem using weakly supervised labels with
albedo and normal consistency from multi-view stereo. With
neural radiance fields (NeRF), editing the appearance code
could produce more realistic results without interpreting
the outdoor scene image formation explicitly. This paper
proposes to complement the intrinsic estimation from vol-
ume rendering using NeRF and from inversing the photo-
metric image formation model using convolutional neural
networks (CNNs). The former produces richer and more
reliable pseudo labels (cast shadows and sky appearances
in addition to albedo and normal) for training the latter to
predict interpretable and editable lighting parameters via a
single-image prediction pipeline. We demonstrate the ad-
vantages of our method for both intrinsic image decompo-
sition and relighting for various real outdoor scenes.
| 1. Introduction
The same landmark may appear with drastically vary-
ing appearances in different photos, even if they are taken
from the same viewpoint with the same camera parameters,
e.g., the Taj Mahal may look golden or white at sunset or
in the afternoon1. For a set of photos containing the same
landmark captured in different seasons and times, their “dy-
namic” lighting changes (compared with the relatively “sta-
ble” geometry and reflectance) play a vital role in explain-
ing such great appearance variations. If we can indepen-
dently manipulate lighting in these photos, the relighted
outdoor scenes could substantially improve experiences for
taking digital photographs.
Outdoor scene relighting could be realized by learning a
style transfer procedure [1, 5]. Such a process only requires
a single reference image for editing a target image, but it merely
retouches the target image to “look like” the reference, with-
out explicitly modeling the lighting changes.
#Contributed equally to this work as first authors. ∗Corresponding author.
1Changing colours of Taj Mahal: agratajcitytour.com
age decomposition, which inversely decomposes the photo-
metric image formation model, has been extended to work
with outdoor photo collections [32–34]. The common ge-
ometry/reflectance (for the whole collection) and distinctive
lighting components (for each image) are estimated using
deep convolutional neural networks (CNNs), so that relight-
ing could be achieved by keeping the former while editing
the latter in a physics-aware manner. These methods ex-
plicitly conduct computationally expensive multi-view re-
construction of the scene at the training stage. The weakly
supervised constraints built upon albedo and normal con-
sistency via multi-view correspondence cannot support the
handling of cast shadows [33] or still struggle with strong
cast shadows [32].
Recently, the emergence of neural radiance fields
(NeRF) [23] has not only boosted the performance of novel
view synthesis with significantly better quality for outdoor
scene photo collections [22], but has also been demon-
strated to be capable of transferring the lighting appearance
across the image set using hallucination [4] or a parametric
lighting model [28]. However, these existing NeRF meth-
ods in the outdoor scene either miss the explanation to some
important intrinsics for relighting such as cast shadows (ex-
cept for [28]) or ignore distinctive characteristics between
the non-sky and sky regions, in a physically interpretable
manner.
In this paper, we hope to conduct outdoor scene re-
lighting by mutually complementing intrinsics estimated
from NeRF and CNN, taking advantage of both the com-
prehensive representation power of NeRF and the physical in-
terpretability of CNN-based single-image decomposition,
in a single-image inference pipeline as shown in Figure 1.
We formulate the color formation of a pixel as a combina-
tion of objects and the sky by tracing rays from camera ori-
gins to the furthest plane and accumulating the voxel intrin-
Figure 1. Illustration of our overall pipeline. For each image, the corresponding camera rays and sampled 3D points are positionally-
encoded for MLPs (gray block). Our IntrinsicCNN and LightingCNN modules derive lighting-(in)dependent intrinsic components and
second-order spherical harmonics (SH) lighting coefficients (green block), while our NeRF module provides the pseudo labels of intrinsics
and sky mask by volumetric rendering through MLPs (blue block). Our SkyMLP module renders the sky with viewing direction and
extracted lighting (orange block). Given lighting extracted from the input/reference image, the reconstructed/relighted image is rendered
by photometric image formation along with the rendered sky (yellow block).
sics. We then propose a modified NeRF system to estimate
diffuse albedo, surface normal, cast shadow, and illumina-
tion parameters. Our NeRF rendering naturally shares the
albedo and normal of one point across all images and inter-
prets the geometry from voxel density, which provides more
accurate pseudo labels for identifying shadows than purely
CNN-based approaches [32–34]. We finally predict the in-
trinsics and lighting parameters by designing two separate
CNN modules based on the NeRF-produced pseudo labels
with a clearer separation of lighting-dependent and indepen-
dent intrinsic components to achieve high-quality relighting
via single-image prediction.
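As a hedged illustration of the photometric image formation described above, the sketch below renders a batch of pixels as albedo times Lambertian shading from second-order SH lighting, attenuated by a cast-shadow scalar, and blends the result with a rendered sky color via the sky mask. The tensor layout and SH convention are common choices and are assumptions rather than the paper's exact formulation.

```python
# Albedo * SH shading * shadow for object pixels, blended with sky by the sky mask.
import torch

def sh_basis(n):
    """Second-order (9-term) real SH basis at unit normals n: (N, 3) -> (N, 9)."""
    x, y, z = n[:, 0], n[:, 1], n[:, 2]
    one = torch.ones_like(x)
    return torch.stack([
        0.282095 * one,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3 * z ** 2 - 1),
        1.092548 * x * z,
        0.546274 * (x ** 2 - y ** 2),
    ], dim=-1)

def render_pixels(albedo, normal, shadow, sky_rgb, sky_mask, sh_coeff):
    """albedo: (N,3), normal: (N,3), shadow: (N,1), sky_rgb: (N,3),
    sky_mask: (N,1) in [0,1], sh_coeff: (9,3) lighting -> (N,3) colors."""
    shading = sh_basis(normal) @ sh_coeff            # per-channel irradiance
    obj_rgb = albedo * shading.clamp(min=0.0) * shadow
    return (1.0 - sky_mask) * obj_rgb + sky_mask * sky_rgb
```

Relighting then amounts to swapping `sh_coeff` (and the rendered sky) for the lighting extracted from a reference image while keeping albedo, normal, and shadow fixed.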
Hence, our contributions are three-fold: 1) a newly proposed
“object-sky” hybrid image formation; 2) intrinsics estimated from
NeRF that provide more accurate pseudo labels to complement 3) the intrin-
sics estimated from CNNs. Together, they enable outdoor scene
relighting from a single image in a physically interpretable
manner and with a visually pleasing appearance, as
demonstrated by our experimental results.
|
Yu_Zero-Shot_Referring_Image_Segmentation_With_Global-Local_Context_Features_CVPR_2023 | Abstract
Referring image segmentation (RIS) aims to find a seg-
mentation mask given a referring expression grounded to a
region of the input image. Collecting labelled datasets for
this task, however, is notoriously costly and labor-intensive.
To overcome this issue, we propose a simple yet effective
zero-shot referring image segmentation method by leverag-
ing the pre-trained cross-modal knowledge from CLIP . In
order to obtain segmentation masks grounded to the input
text, we propose a mask-guided visual encoder that cap-
tures global and local contextual information of an input im-
age. By utilizing instance masks obtained from off-the-shelf
mask proposal techniques, our method is able to segment
fine-detailed instance-level groundings. We also introduce a
global-local text encoder where the global feature captures
complex sentence-level semantics of the entire input expres-
sion while the local feature focuses on the target noun phrase
extracted by a dependency parser. In our experiments, the
proposed method outperforms several zero-shot baselines of
the task and even the weakly supervised referring expression
segmentation method with substantial margins. Our code is
available at https://github.com/Seonghoon-Yu/Zero-shot-RIS.
| 1. Introduction
Recent advances of deep learning has revolutionised com-
puter vision and natural language processing, and addressed
various tasks in the field of vision-and-language [4, 19, 27,
28, 36, 43, 50]. A key element in the recent success of the
multi-modal models such as CLIP [43] is the contrastive
image-text pre-training on a large set of image and text pairs.
It has shown a remarkable zero-shot transferability on a wide
range of tasks, such as object detection [9, 10, 13], semantic
segmentation [7, 12, 59, 63], image captioning [40], visual
question answering (VQA) [47] and so on.
Despite its good transferability of pre-trained multi-modal
models, it is not straightforward to handle dense prediction
tasks such as object detection and image segmentation. A
pixel-level dense prediction task is challenging since there
Figure 1. Illustrations of the task of referring image segmenta-
tion and motivations of global-local context features. To find the
grounded mask given an expression, we need to understand the
relations between the objects as well as their semantics.
is a substantial gap between the image-level contrastive pre-
training task and the pixel-level downstream task such as se-
mantic segmentation. There have been several attempts to re-
duce the gap between the two tasks [44, 54, 63], but these works aim
to fine-tune the model, consequently requiring task-specific
dense annotations, which are notoriously labor-intensive and
costly.
Referring image segmentation is a task to find the specific
region in an image given a natural language text describing
the region, and it is well-known as one of challenging vision-
and-language tasks. Collecting annotations for this task is
even more challenging as the task requires to collect precise
referring expression of the target region as well as its dense
mask annotation. Recently, a weakly-supervised referring
image segmentation method [48] has been proposed to overcome
this issue. However, it still requires high-level text expres-
sion annotations paired with images for the target datasets,
and its performance is far from that of the
supervised methods. To tackle this issue, in this paper, we fo-
cus on zero-shot transfer of the pre-trained knowledge
of CLIP to the task of referring image segmentation.
Moreover, this task is challenging because it requires
high-level understanding of language and comprehensive
understanding of an image, as well as a dense instance-level
prediction. There have been several works for zero-shot
semantic segmentation [7, 12, 59, 63], but they cannot be
directly extended to the zero-shot referring image segmenta-
tion task because it has different characteristics. Specifically,
the semantic segmentation task does not need to distinguish
instances, but the referring image segmentation task should
be able to predict an instance-level segmentation mask. In
addition, among multiple instances of the same class, only
one instance described by the expression must be selected.
For example, in Figure 1, there are two cats in the input
image. If the input text is given by “a cat is lying on the seat
of the scooter” , the cat with the green mask is the proper
output. To find this correct mask, we need to understand the
relation between the objects ( i.e.“lying on the seat” ) as well
as their semantics ( i.e.“cat”, “scooter” ).
In this paper, we propose a new baseline for the zero-shot refer-
ring image segmentation task using a pre-trained model from
CLIP, where global and local contexts of an image and an ex-
pression are handled in a consistent way. In order to localize
an object mask region in an image given a textual referring
expression, we propose a mask-guided visual encoder that
captures global and local context information of an image
given a mask. We also present a global-local textual encoder
where the local-context is captured by a target noun phrase
and the global context is captured by a whole sentence of the
expressions. By combining features in two different context
levels, our method is able to capture comprehensive
knowledge as well as a specific trait of the target object. Note
that, although our method does not require any additional
training on the CLIP model, it outperforms all baselines and
the weakly supervised referring image segmentation method
by a large margin.
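A minimal sketch of this zero-shot scoring idea with the public CLIP package is given below: each mask proposal is scored by the cosine similarity between a mixed global-local visual feature and a mixed global-local text feature, and the best-scoring proposal is returned. The mixing weights, the use of masked crops, and the helper names are illustrative assumptions rather than the exact formulation in this paper.

```python
# Score off-the-shelf mask proposals against a referring expression with CLIP.
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def encode_text(sentence, noun_phrase, alpha=0.5):
    tok = clip.tokenize([sentence, noun_phrase]).to(device)
    feats = model.encode_text(tok).float()
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return alpha * feats[0] + (1 - alpha) * feats[1]          # global + local text

@torch.no_grad()
def score_proposals(image_pil, masked_crops_pil, text_feat, beta=0.5):
    """masked_crops_pil: one PIL crop per mask proposal (background suppressed)."""
    glob = model.encode_image(preprocess(image_pil).unsqueeze(0).to(device)).float()
    locs = torch.cat([model.encode_image(preprocess(c).unsqueeze(0).to(device)).float()
                      for c in masked_crops_pil])
    vis = beta * locs + (1 - beta) * glob                      # global + local visual
    vis = vis / vis.norm(dim=-1, keepdim=True)
    return vis @ text_feat                                     # one score per mask

# best_mask_idx = score_proposals(img, crops, encode_text(expr, phrase)).argmax()
```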
Our main contributions can be summarised as follows:
•We propose a new task of zero-shot referring image
segmentation based on CLIP without any additional
training. To the best of our knowledge, this is the first
work to study the zero-shot referring image segmenta-
tion task.
•We present a visual encoder and a textual encoder that
integrate global and local contexts of images and sen-
tences, respectively. Although the modalities of the two
encoders are different, our visual and textual features
are handled in a consistent way.
•The proposed global-local context features take full
advantage of CLIP to capture the target object seman-
tics as well as the relations between the objects in both
visual and textual modalities.•Our method consistently shows outstanding results com-
pared to several baseline methods, and also outperforms
the weakly supervised referring image segmentation
method with substantial margins.
|
Yang_QPGesture_Quantization-Based_and_Phase-Guided_Motion_Matching_for_Natural_Speech-Driven_Gesture_CVPR_2023 | Abstract
Speech-driven gesture generation is highly challenging
due to the random jitters of human motion. In addition,
there is an inherent asynchronous relationship between hu-
man speech and gestures. To tackle these challenges, we in-
troduce a novel quantization-based and phase-guided mo-
tion matching framework. Specifically, we first present a
gesture VQ-VAE module to learn a codebook to summa-
rize meaningful gesture units. With each code representing
a unique gesture, random jittering problems are alleviated
effectively. We then use Levenshtein distance to align di-
verse gestures with different speech. Levenshtein distance
based on audio quantization as a similarity metric of cor-
responding speech of gestures helps match more appropri-
ate gestures with speech, and solves the alignment prob-
lem of speech and gestures well. Moreover, we introduce
phase to guide the optimal gesture matching based on the
semantics of context or rhythm of audio. Phase guides when
text-based or speech-based gestures should be performed
to make the generated gestures more natural. Extensive
experiments show that our method outperforms recent ap-
proaches on speech-driven gesture generation. Our code,
database, pre-trained models and demos are available at
https://github.com/YoungSeng/QPGesture .
| 1. Introduction
Nonverbal behavior plays a key role in conveying mes-
sages in human communication [26], including facial ex-
pressions, hand gestures and body gestures. Co-speech ges-
tures help people express themselves better [45]. However, producing
human-like and speech-appropriate gestures is still very dif-
ficult due to two main challenges: 1) Random jittering :
People make many small jitters and movements when they
speak, which can lead to a decrease in the quality of the gen-
erated gestures. 2) Inherent asynchronicity with speech:
Unlike speech with face or lips, there is an inherent asyn-
chronous relationship between human speech and gestures.
Figure 1. Gesture examples generated by our proposed method
on various types of speech. The character is from Mixamo [2].
Most existing gesture generation studies intend to solve
the two challenges in a single ingeniously designed neural
network that directly maps speech to 3D joint sequence in
high-dimensional continuous space [18, 24, 27, 31] using a
sliding window with a fixed step size [17, 46, 47]. How-
ever, such methods are limited by the representation power
of the proposed neural networks, as the GENEA gesture-
generation challenge results indicate. No system in the GENEA chal-
lenge 2020 [26] was rated above a bottom line that paired the in-
put speech audio with mismatched excerpts of training motion
data. In the GENEA challenge 2022 [48], a motion match-
ing based method [50] ranked first in the human-likeness
evaluation and upper-body appropriateness evaluation, and
outperformed all other neural network-based models. These
results indicate that motion matching based models, if de-
signed properly, are more effective than neural network
based models.
Inspired by this observation, in this work, we propose
a novel quantization-based motion matching framework for
audio-driven gesture generation. Our framework includes
two main components aimed at solving the two challenges
above, respectively.
Figure 2. Gesture generation pipeline of our proposed framework.
‘Quan.’ is short for ‘quantization’ and ‘Cand.’ is short for ‘candidate’.
Given a piece of audio, text and seed pose, the audio and gesture
are quantized. The candidate for the speech is calculated based on
the Levenshtein distance, and the candidate for the text is calculated
based on the cosine similarity. The optimal gesture is selected based
on phase guidance corresponding to the seed code and the phase
corresponding to the two candidates.
First, to address the random jittering
challenge, we compress human gestures into a space that is
lower dimensional and discrete, to reduce input redundancy.
Instead of manually indicating the gesture units [23], we use
a vector quantized variational autoencoder (VQ-VAE) [42]
to encode and quantize joint sequences to a codebook in
an unsupervised manner, using a quantization bottleneck.
Each learned code is shown to represent a unique gesture
pose. By reconstructing the discrete gestures, some ran-
dom jittering problems, such as hand grabbing and glasses
pushing, are alleviated. Second, to address the inherent
asynchronicity of speech and gestures, Levenshtein distance
[28] is used based on audio quantization. Levenshtein dis-
tance helps match more appropriate gestures with speech,
and solves the alignment problem of speech and gestures
well. Moreover, unlike the recent gesture matching mod-
els [17, 50], we also consider the semantic information of
the context. Third, since body motion is spatially composed of
multiple periodic motions, and phase values can describe the
nonlinear periodicity of high-dimensional motion curves well [39],
we use phase to guide how gestures should be matched to speech and
text.
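To make the audio-matching step concrete, here is a small sketch: the query speech and the speech attached to each database gesture clip are represented as sequences of discrete audio codes, and the clip whose code sequence has the smallest Levenshtein (edit) distance to the query is selected. The data layout is an assumption, and the full system additionally uses text similarity and phase guidance before committing to a candidate.

```python
# Edit-distance matching over quantized audio code sequences.
def levenshtein(a, b):
    """Edit distance between two sequences of discrete codes."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (ca != cb))) # substitution
        prev = cur
    return prev[-1]

def best_audio_candidate(query_codes, database):
    """database: list of (gesture_clip, audio_code_sequence) pairs."""
    return min(database, key=lambda item: levenshtein(query_codes, item[1]))[0]

# Example: integer codes from a VQ audio encoder; shortest edit distance wins.
# clip = best_audio_candidate([34, 96, 96, 12], [(g1, [34, 96, 12]), (g2, [7, 7, 1])])
```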
The inference procedure of our framework is shown in
Figure 2. Given a piece of audio, text and seed pose, the
audio and gesture are first quantized. The best candidate
for the speech is calculated based on the Levenshtein dis-
tance, and the best candidate for the text is calculated based
on the cosine similarity. Then the most optimal gesture is
selected based on the phase corresponding to the seed code
and the phase corresponding to the two candidates. Our
code, database, pre-trained models and demos will be pub-
licly available soon.
The main contributions of our work are:
• We present a novel quantization-based motion match-
ing framework for speech-driven gesture generation.• We propose to align diverse gestures with different
speech using Levenshtein distance, based on audio
quantization.
• We design a phase guidance strategy to select optimal
audio and text candidates for motion matching.
• Extensive experiments show that jittering and asyn-
chronicity issues can be effectively alleviated by our
framework.
|
Ye_DeepSolo_Let_Transformer_Decoder_With_Explicit_Points_Solo_for_Text_CVPR_2023 | Abstract
End-to-end text spotting aims to integrate scene text de-
tection and recognition into a unified framework. Deal-
ing with the relationship between the two sub-tasks plays
a pivotal role in designing effective spotters. Although
Transformer-based methods eliminate the heuristic post-
processing, they still suffer from the synergy issue between
the sub-tasks and low training efficiency. In this paper, we
present DeepSolo , a simple DETR-like baseline that lets
a single Decoder with Explicit Points Solo for text detec-
tion and recognition simultaneously. Technically, for each
text instance, we represent the character sequence as or-
dered points and model them with learnable explicit point
queries. After passing a single decoder, the point queries
have encoded requisite text semantics and locations, thus
can be further decoded to the center line, boundary, script,
and confidence of text via very simple prediction heads in
parallel. Besides, we also introduce a text-matching cri-
terion to deliver more accurate supervisory signals, thus
enabling more efficient training. Quantitative experiments
on public benchmarks demonstrate that DeepSolo outper-
forms previous state-of-the-art methods and achieves better
training efficiency. In addition, DeepSolo is also compati-
ble with line annotations, which require much less annota-
tion cost than polygons. The code is available at https:
//github.com/ViTAE-Transformer/DeepSolo .
| 1. Introduction
Detecting and recognizing text in natural scenes, a.k.a.
text spotting, has drawn increasing attention due to its wide
range of applications [5, 34, 56, 59], such as autonomous
driving [57] and intelligent navigation [7]. How to deal with
the relationship between detection and recognition is a long-
standing problem in designing the text spotting pipeline and
has a significant impact on structure design, spotting perfor-
mance, training efficiency, and annotation cost, etc.
*Equal contribution. †Corresponding author. This work was done dur-
ing Maoyuan Ye's internship at JD Explore Academy.
Figure 1. Comparison of pipelines and query designs. TrEnc.
(TrDec.): Transformer encoder (decoder). Char: character. Panels
(a) RoI-based, (b) Seg-based, and (c) Ours compare spotting pipelines;
(d) TTS, (e) TESTR, and (f) Ours compare query designs.
Most pioneering end-to-end spotting methods [16,25,28,
29,31,37,43,44,51] follow a detect-then-recognize pipeline,
which first detects text instances and then exploits Region-
of-Interest (RoI) based connectors to extract features within
the detected area, finally feeds them into the following rec-
ognizer (Fig. 1a). Although these methods have achieved
great progress, there are two main limitations. 1) An extra
connector for feature alignment is indispensable. Moreover,
some connectors require polygon annotations, which are
not applicable when only weak annotations are available.
2) Additional efforts are desired to address the synergy is-
sue [17,62] between the detection and recognition modules.
In contrast, the segmentation-based methods [49, 53] try to
isolate the two sub-tasks and complete spotting in a paral-
lel multi-task framework with a shared backbone (Fig. 1b).
Nevertheless, they are sensitive to noise and require group-
ing post-processing to gather unstructured components.
Recently, Transformer [47] has improved the perfor-
mance remarkably for various computer vision tasks [9, 10,
23, 32, 33, 40, 46, 50, 54, 60], including text spotting [17, 21,
39, 61]. Although the spotters [21, 61] based on DETR [3]
can get rid of the connectors and heuristic post-processing,
they lack an efficient joint representation to deal with scene
text detection and recognition, e.g., requiring an extra RNN
module in TTS [21] (Fig. 1d) or exploiting an individual Trans-
former decoder for each sub-task in TESTR [61] (Fig. 1e).
The generic object query exploited in TTS fails to consider
the unique characteristics of scene text, e.g., location and
shape. TESTR, meanwhile, uses point queries with a box positional
prior that is coarse for point prediction, and its queries are
different for detection and recognition, introducing unex-
pected heterogeneity. Consequently, these designs have a
side effect on performance and training efficiency [55].
In this paper, we propose a novel query form based on
explicit point representations of text lines. Built upon it,
we present a succinct DETR-like baseline that lets a sin-
gleDecoder with Explicit Points Solo (dubbed DeepSolo )
for detection and recognition simultaneously (Fig. 1c and
Fig. 1f). Technically, for each instance, we first represent
the character sequence as ordered points, where each point
has explicit attributes of position, offsets to the top and bot-
tom boundary, and category. Specifically, we devise top- K
Bezier center curves to fit scene text instances with arbitrary
shape and sample a fixed number of on-curve points cov-
ering characters in each text instance. Then, we leverage
the sampled points to generate positional queries and guide
the learnable content queries with explicit positional prior.
Next, we feed the image features from the Transformer en-
coder and the point queries into a single Transformer de-
coder, where the output queries are expected to have en-
coded requisite text semantics and locations. Finally, we
adopt several very simple prediction heads (a linear layer or
MLP) in parallel to decode the queries into the center line,
boundary, script, and confidence of text, thereby solving de-
tection and recognition simultaneously.
In summary, the main contributions are three-fold: 1)
We propose DeepSolo, i.e., a succinct DETR-like baseline
with a single Transformer decoder and several simple pre-
diction heads, to solve text spotting efficiently. 2)We pro-
pose a novel query form based on explicit points sampled
from the Bezier center curve representation of text instance
lines, which can efficiently encode the position, shape, and
semantics of text, thus helping simplify the text spotting
pipeline. 3)Experimental results on public datasets demon-
strate that DeepSolo is superior to previous representative
methods in terms of spotting accuracy, training efficiency,
and annotation flexibility.
|
Yu_MonoHuman_Animatable_Human_Neural_Field_From_Monocular_Video_CVPR_2023 | Abstract
Animating virtual avatars with free-view control is cru-
cial for various applications like virtual reality and digi-
tal entertainment. Previous studies have attempted to uti-
lize the representation power of the neural radiance field
(NeRF) to reconstruct the human body from monocular
videos. Recent works propose to graft a deformation net-
work into the NeRF to further model the dynamics of the hu-
man neural field for animating vivid human motions. How-
ever, such pipelines either rely on pose-dependent repre-
sentations or fall short of motion coherency due to frame-
independent optimization, making it difficult to generalize
to unseen pose sequences realistically. In this paper, we
propose a novel framework MonoHuman , which robustly
renders view-consistent and high-fidelity avatars under ar-
bitrary novel poses. Our key insight is to model the de-
formation field with bi-directional constraints and explic-
itly leverage the off-the-peg keyframe information to reason
the feature correlations for coherent results. Specifically,
we first propose a Shared Bidirectional Deformation mod-
ule, which creates a pose-independent generalizable defor-
mation field by disentangling backward and forward defor-
mation correspondences into shared skeletal motion weight
and separate non-rigid motions. Then, we devise a Forward
Correspondence Search module, which queries the corre-
spondence feature of keyframes to guide the rendering net-work. The rendered results are thus multi-view consistent
with high fidelity, even under challenging novel pose set-
tings. Extensive experiments demonstrate the superiority of
our proposed MonoHuman over state-of-the-art methods.
| 1. Introduction
Rendering a free-viewpoint photo-realistic view synthe-
sis of a digital avatar with explicit pose control is an im-
portant task that will bring benefits to AR/VR applica-
tions, virtual try-on, movie production, telepresence, etc.
However, previous methods [32, 33, 58] usually require
carefully-collected multi-view videos with complicated sys-
tems and controlled studios, which limits the usage in gen-
eral and personalized scenarios applications. Therefore,
though challenging, it has a significant application value
todirectly recover and animate the digital avatar from a
monocular video.
Previous rendering methods [33] can synthesize realis-
tic novel view images of the human body, but struggle to ani-
mate the avatar in unseen poses. To address this, some re-
cent methods deform the neural radiance field (NeRF [27])
[32, 49] to learn backward skinning weights of paramet-
ric models depending on the pose or individual frame in-
dex. They can animate the recovered human in novel poses
with small variations from the training set. However, as the
blending weights are pose-dependent, these methods usu-
ally over-fit to the seen poses of the training data and therefore
lack generalizability [49]. Notably, one category of ap-
proaches [5, 21] solves this problem by learning pose-
independent forward blend weights in canonical space and
using a root-finding algorithm to search for the backward cor-
respondence. However, the root-finding algorithm is time-
consuming. [4] proposes to add a forward mapping network
to help the learning of the backward mapping, but despite the
consistency constraint, their backward warping weights are still
frame-dependent. Some other works [16, 32, 42] leverage the
blend weights from a template model like SMPL [25]. The
accuracy of their deformation heavily hinges on the tem-
plate model and always fails on the cloth parts, as SMPL
does not model clothing. How to learn an accurate and generaliz-
able deformation field is still an open problem.
To reconstruct and animate a photo-realistic avatar that
can generalize to unseen poses from a monocular video,
we draw three key observations from recent studies for achieving
generalizability: 1)The deformation weight
field should be defined in canonical space and as pose-
independent as possible [5,48,49]; 2)An ideal deformation
field should unify forward and backward deformation to al-
leviate ambiguous correspondence on novel poses; 3)Direct
appearance reference from input observation helps improve
the fidelity of rendering.
We propose MonoHuman , a novel framework that en-
joys the above strengths to reconstruct an animatable digi-
tal avatar from only monocular video. Concretely, we show
how to learn more correct and general deformation from
such limited video data and how to render realistic results
from deformed points in canonical space. We first intro-
duce a novel Shared Bidirectional Deformation module to
graft into the neural radiance fields, which disentangles the
backward and forward deformation into one shared skeletal
motion and two separate residual non-rigid motions. The
shared motion basis encourages the network to learn more
general rigid transformation weights. The separate residual
non-rigid motions guarantee the expressiveness of the accu-
rate recovery of pose-dependent deformation. With the ac-
curate learned deformation, we further build an observation
bank consisting of information from sparse keyframes, and
present a Forward Correspondence Search module to search
observation correspondence from the bank which helps cre-
ate high fidelity and natural human appearance, especially
the invisible part in the current frame. With the above de-
signs, our framework can synthesize a human at any view-
point and any pose with natural shape and appearance.
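The sketch below outlines one way such a bidirectional module could be structured in PyTorch: a single weight field defined over 3D points supplies the shared skeletal (LBS) motion, while two separate MLPs add forward and backward non-rigid residuals. Querying the shared weight field directly at observation-space points in the backward pass is a simplification made here for brevity, and the pose feature fed to the residuals is an assumed flattened bone-transform vector broadcast per point; neither detail is claimed to match the paper exactly.

```python
# Shared skeletal weights + separate forward/backward non-rigid residuals.
import torch
import torch.nn as nn

def mlp(din, dout, width=128):
    return nn.Sequential(nn.Linear(din, width), nn.ReLU(),
                         nn.Linear(width, width), nn.ReLU(),
                         nn.Linear(width, dout))

class BiDeformer(nn.Module):
    def __init__(self, n_bones=24):
        super().__init__()
        self.weight_field = mlp(3, n_bones)       # shared skeletal motion weights
        self.res_fwd = mlp(3 + n_bones * 12, 3)   # forward non-rigid residual
        self.res_bwd = mlp(3 + n_bones * 12, 3)   # backward non-rigid residual

    def lbs(self, x, w, bones):
        """x: (N,3), w: (N,B), bones: (B,4,4) -> (N,3) skinned points."""
        xh = torch.cat([x, torch.ones_like(x[:, :1])], -1)           # homogeneous
        per_bone = torch.einsum('bij,nj->nbi', bones, xh)[..., :3]   # (N,B,3)
        return (w.softmax(-1)[..., None] * per_bone).sum(1)

    def forward_deform(self, x_c, bones, pose_feat):
        w = self.weight_field(x_c)
        return self.lbs(x_c, w, bones) + self.res_fwd(torch.cat([x_c, pose_feat], -1))

    def backward_deform(self, x_o, bones, pose_feat):
        w = self.weight_field(x_o)                # simplification: weights at x_o
        return self.lbs(x_o, w, torch.inverse(bones)) + \
               self.res_bwd(torch.cat([x_o, pose_feat], -1))
```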
To summarize, our main contributions are three-fold:
• We present a new approach MonoHuman that can syn-
thesize the free viewpoint and novel pose sequences of
a performer with explicit pose control, only requiring
a monocular video as supervision.• We propose the Shared Bidirectional Deformation
module to achieve generalizable consistent forward
and backward deformation, and the Forward Search
Correspondence module to query correspondence ap-
pearance features to guide the rendering step.
• Extensive experiments demonstrate that our frame-
work MonoHuman renders high-fidelity results and
outperforms state-of-the-art methods.
|
Yang_Neural_Vector_Fields_Implicit_Representation_by_Explicit_Learning_CVPR_2023 | Abstract
Deep neural networks (DNNs) are widely applied for
nowadays 3D surface reconstruction tasks and such meth-
ods can be further divided into two categories, which re-
spectively warp templates explicitly by moving vertices or
represent 3D surfaces implicitly as signed or unsigned dis-
tance functions. Taking advantage of both advanced ex-
plicit learning process and powerful representation abil-
ity of implicit functions, we propose a novel 3D repre-
sentation method, Neural Vector Fields (NVF). It not only
adopts the explicit learning process to manipulate meshes
directly, but also leverages the implicit representation of
unsigned distance functions (UDFs) to break the barri-
ers in resolution and topology. Specifically, our method
first predicts the displacements from queries towards the
surface and models the shapes as Vector Fields. Rather
than relying on network differentiation to obtain direction
fields as most existing UDF-based methods do, the produced
vector fields encode both the distance and direction fields
and mitigate the ambiguity at “ridge” points, such that
the calculation of direction fields is straightforward and
differentiation-free. The differentiation-free characteristic
enables us to further learn a shape codebook via Vector
Quantization, which encodes the cross-object priors, accel-
erates the training procedure, and boosts model generaliza-
tion on cross-category reconstruction. The extensive exper-
iments on surface reconstruction benchmarks indicate that
our method outperforms those state-of-the-art methods in
different evaluation scenarios including watertight vs non-
watertight shapes, category-specific vs category-agnostic
reconstruction, category-unseen reconstruction, and cross-
domain reconstruction. Our code is released at https:
//github.com/Wi-sc/NVF .
| 1. Introduction
Reconstructing continuous surfaces from unstructured,
discrete and sparse point clouds is an emergent but non-
trivial task in today's robotics, vision and graphics appli-
cations, since point clouds are hard to deploy in
downstream applications without first being recovered into high-
resolution surfaces [5, 7, 38, 42].
Figure 1. Common 3D representations. Explicit representations:
(a) point clouds, (b) meshes, (c) voxels. Implicit representa-
tions: (c) occupancy, (d) reconstruction from the signed distance
functions, and (e) reconstruction from unsigned distance func-
tions. Our method represents continuous surfaces through (f) vec-
tor fields. (g) Vector fields can deform meshes (red) as explicit
representation methods do.
With the tremendous success of Deep Neural Networks
(DNNs), a few DNN-based surface reconstruction meth-
ods have already achieved promising reconstruction perfor-
mance. These methods can be roughly divided into two cat-
egories according to whether their output representations
are explicit or implicit. As shown in Fig. 1, explicit rep-
resentation methods including mesh andvoxel based ones
denote the exact location of a surface, which learn to warp
templates [3, 4, 19, 26, 29, 68] or predict voxel grids [10, 30,
59]. Explicit representations are friendly to downstream ap-
plications, but they are usually limited by resolution and
topology. On the other hand, implicit representations such
asOccupancy andSigned Distance Functions (SDFs) repre-
sent the surface as an isocontour of a scalar function, which
receives increasing attention due to their capacity to repre-
sent surfaces with more complicated topology and at arbi-
trary resolution [12, 14, 22, 35, 44, 48]. However, most im-
plicit representation methods usually require specific pre-
processing to close non-watertight meshes and remove in-
ner structures. To free from the above pre-processing re-
quirements for implicit representation, Chibane et al. [15]
introduced Neural Unsigned Distance Fields (NDF) , which
employs the Unsigned Distance Functions (UDFs) for neu-
ral implicit functions (NIFs) and models continuous sur-
faces by predicting positive scalar between query locations
and the target surface. Despite certain advantages, UDFs
require a more complicated surface extraction process than
other implicit representation methods ( e.g., SDFs). Such a
process using Ball-Pivoting Algorithm [5] or gradient-based
Marching Cube [28,83] relies on model differentiation dur-
ing inference ( i.e., differentiation-dependent). Moreover,
UDFs leave gradient ambiguities at “ridge” points, where
the gradients1used for surface extraction cannot accurately
point at target points as illustrated by Fig. 2a.
In this work, we propose a novel 3D representation
method, Neural Vector Fields (NVF) , which leverages the
explicit learning process of direct manipulation on meshes
and the implicit representation of UDFs to enjoy the ad-
vantages of both approaches. That is, NVF can directly
manipulate meshes as those explicit representation meth-
ods as Fig. 1g, while representing the shapes with arbi-
trary resolution and topology as those implicit represen-
tation methods. Specifically, NVF models the 3D shapes
as vector fields and computes the displacement between a
pointq∈R3and its nearest-neighbor point on the surface
ˆ q∈R3by using a learned function f(q) =∆q=ˆ q−q:
R3⇒R3. Therefore, NVF could serve both as an implicit
function and an explicit deformation function, since the dis-
placement output of the function could be directly used to
deform source meshes ( i.e., Fig. 1g). In general, it encodes
both distance and direction fields within vector fields, which
can be straightforwardly obtained from the vector fields.
Different from existing UDF-based methods, our NVF
representation avoids the comprehensive inference process
by skipping the gradient calculation during the surface ex-
traction procedure1, and mitigates ambiguities by directly
learning displacements, as illustrated by Fig. 2b. Such a
one-pass forward-propagation nature frees NVF from dif-
ferentiation dependency, significantly reduces the infer-
ence time and memory cost, and allows our model to
learn a shape codebook consisting of un-differentiable dis-
crete shape codes in the embedded feature space.
1Learning-based methods calculate the gradients of distance fields via
model differentiation. The opposite direction of gradients should point to
the nearest-neighbor point on the target surface.
Figure 2. Gradient ambiguities. (a) NDF [15] cannot guarantee
to pull points onto surfaces (i.e., ambiguity of gradient), while (b)
our NVF addresses this issue by direct displacement learning.
The
learned shape codebook further provides cross-object priors
to consequently improve the model generalization on cross-
category reconstruction, and accelerates the training proce-
dure as a regularization term during training. We use VQ
as an example to demonstrate that the differentiation-
free property of NVF provides more flexibility in model
design in this paper.
We conduct extensive experiments on two surface
reconstruction benchmark datasets: a synthetic dataset
ShapeNet [8] and a real scanned dataset MGN [6]. Besides
category-specific reconstruction [15,76] as demonstrated in
most reconstruction methods, we also evaluate our frame-
work by category-agnostic reconstruction, category-unseen
reconstruction, and cross-domain reconstruction tasks to
exploit the model generalization. Our experimental results
indicate that our NVF can significantly reduce the inference
time compared with other UDF-based methods as we avoid
the comprehensive surface extraction step and circumvent
the requirement of gradient calculation at query locations.
Also, using the shape codebook, we observe a significant
performance improvement and a better model generaliza-
tion across categories.
Our contributions are summarized as follows.
• We propose a 3D representation NVF for better 3D
field representation, which bridges the explicit learn-
ing and implicit representations, and benefits from
both of their advantages. Our method can obtain the
displacement of a query location in a differentiation-
free way, and thus it significantly reduces the infer-
ence complexity and provides more flexibility in de-
signing network structures which may include non-
differentiable components.
• Thanks to our differentiation-free design, we further
propose a learned shape codebook in the feature space,
which uses a VQ strategy to provide cross-object priors.
In this way, each query location is encoded as a com-
position of discrete codes in feature space and further
used to learn the NVF.
• We conduct the extensive experiments to evaluate the
effectiveness of our proposed method. It consistently
shows promising performance on two benchmarks
across different evaluation scenarios: water-tight vs
non-water-tight shapes, category-specific vs category-
agnostic reconstruction, category-unseen reconstruc-
tion, and cross-domain reconstruction.
|
Zhang_DA-DETR_Domain_Adaptive_Detection_Transformer_With_Information_Fusion_CVPR_2023 | Abstract
The recent detection transformer (DETR) simplifies the
object detection pipeline by removing hand-crafted designs
and hyperparameters as employed in conventional two-
stage object detectors. However, how to leverage the sim-
ple yet effective DETR architecture in domain adaptive ob-
ject detection is largely neglected. Inspired by the unique
DETR attention mechanisms, we design DA-DETR, a do-
main adaptive object detection transformer that introduces
information fusion for effective transfer from a labeled
source domain to an unlabeled target domain. DA-DETR
introduces a novel CNN-Transformer Blender (CTBlender)
that fuses the CNN features and Transformer features inge-
niously for effective feature alignment and knowledge trans-
fer across domains. Specifically, CTBlender employs the
Transformer features to modulate the CNN features across
multiple scales where the high-level semantic information
and the low-level spatial information are fused for accu-
rate object identification and localization. Extensive experi-
ments show that DA-DETR achieves superior detection per-
formance consistently across multiple widely adopted do-
main adaptation benchmarks.
| 1. Introduction
Object detection aims to predict a bounding box and
a class label for objects of interest in images, and it has
been a longstanding challenge in computer vision re-
search. Most existing work adopts a two-stage detection
pipeline that involves heuristic anchor designs, complicated
post-processing such as non-maximum suppression (NMS),
etc. The recent detection transformer (DETR) [5] has at-
tracted increasing attention which greatly simplifies the
two-stage detection pipeline by removing hand-crafted an-
chors [21, 22, 49] and NMS [21, 22, 49]. Despite its great
detection performance under a fully supervised setup, how
to leverage the simple yet effective DETR architecture in
domain adaptive object detection is largely neglected.
*Equal contribution, {jingyi.zhang, jiaxing.huang }@ntu.edu.sg.
†Corresponding author, [email protected].
Figure 1. The vanilla Deformable-DETR [81] trained with la-
beled source data cannot handle target data well due to cross-
domain shift. The introduction of adversarial feature alignment
inDeformable-DETR + Direct-align [19] improves the detection
clearly. The proposed DA-DETR fuses CNN features and trans-
former features ingeniously which achieves superior unsupervised
domain adaptation consistently across four widely adopted bench-
marks including Cityscapes →Foggy cityscapes in (a), SIM 10k
→Cityscapes in (b), KITTI →Cityscapes in (c) and PASCAL
VOC→Clipart1k in (d).
Different from the conventional CNN-based detection
architectures such as Faster RCNN [49], DETR has a CNN
backbone followed by a transformer head consisting of an
encoder-decoder structure. The CNN backbone and the
transformer head learn different types of features [17,48,69]
- the former largely captures low-level localization features
(e.g., edges and lines around object boundaries) while the
latter largely captures global inter-pixel relationships and
high-level semantic features. At the other end, many prior
studies show that fusing different types of features
is often helpful in various visual recognition tasks [9, 11].
Hence, it is very meaningful to investigate how to fuse the
two types of DETR features to address the domain adaptive
object detection challenge effectively.
We design DA-DETR, a simple yet effective Domain
Adaptive DETR that introduces information fusion into
the DETR architecture for effective domain adaptive ob-
ject detection. The core design is a CNN-Transformer
Blender (CTBlender) that employs the high-level semantic
features in the Transformer head to conditionally modulate
the low-level localization features in the CNN backbone.
CTBlender consists of two sequential fusion components,
including split-merge fusion (SMF) that fuses CNN and
Transformer features within an image and scale aggregation
fusion (SAF) that fuses the SMF features across multiple
feature scales. Different from the existing weight-and-sum
fusion [9, 11], SMF first splits CNN features into multiple
groups with different semantic information as captured by
the Transformer head and then merges them with channel
shuffling for effective information communication among
different groups. The SMF features of each scale are then
aggregated by SAF for fusing both semantic and localiza-
tion information across multiple feature scales. Hence, CT-
Blender captures both semantic and localization features in-
geniously which enables comprehensive and effective inter-
domain feature alignment with a single discriminator.
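A rough PyTorch sketch of the split-merge idea follows: CNN feature channels are split into groups, each group is scaled by a gate derived from pooled Transformer-encoder tokens, and the groups are merged back with a channel shuffle so information mixes across groups. The group count, pooling, and gating form are illustrative assumptions rather than the exact CTBlender design.

```python
# Split CNN channels into groups, modulate by Transformer features, merge with shuffle.
import torch
import torch.nn as nn

class SplitMergeFusion(nn.Module):
    def __init__(self, channels=256, groups=4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        self.to_gates = nn.Linear(channels, groups)   # one gate per channel group

    def channel_shuffle(self, x):
        b, c, h, w = x.shape
        return (x.view(b, self.groups, c // self.groups, h, w)
                 .transpose(1, 2).reshape(b, c, h, w))

    def forward(self, cnn_feat, trans_feat):
        """cnn_feat: (B, C, H, W); trans_feat: (B, L, C) encoder tokens."""
        b, c, h, w = cnn_feat.shape
        gates = torch.sigmoid(self.to_gates(trans_feat.mean(dim=1)))   # (B, groups)
        grouped = cnn_feat.view(b, self.groups, c // self.groups, h, w)
        grouped = grouped * gates[:, :, None, None, None]              # semantic modulation
        fused = grouped.reshape(b, c, h, w)
        return self.channel_shuffle(fused)                             # merge with shuffle
```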
The main contributions of this work can be summarized
in three aspects. First , we propose DA-DETR, a simple yet
effective domain adaptive detection transformer that intro-
duces information fusion for effective domain adaptive ob-
ject detection. To the best of our knowledge, this is the first
work that explores information fusion for domain adaptive
object detection. Second , we design a CNN-Transformer
Blender that fuses the CNN features and Transformer fea-
tures ingeniously for effective feature alignment and knowl-
edge transfer across domains. Third , extensive experi-
ments show that DA-DETR achieves superior object detec-
tion over multiple widely studied domain adaptation bench-
marks as compared with the state-of-the-art as shown in
Fig. 1.
|
Yin_Hi4D_4D_Instance_Segmentation_of_Close_Human_Interaction_CVPR_2023 | Abstract
We propose Hi4D, a method and dataset for the auto-
matic analysis of physically close human-human interaction
under prolonged contact. Robustly disentangling several
in-contact subjects is a challenging task due to occlusions
and complex shapes. Hence, existing multi-view systems
typically fuse 3D surfaces of close subjects into a single,
connected mesh. To address this issue we leverage i) in-
dividually fitted neural implicit avatars; ii) an alternating
optimization scheme that refines pose and surface through
periods of close proximity; and iii) thus segment the fused
raw scans into individual instances. From these instances
we compile Hi4D dataset of 4D textured scans of 20 sub-
ject pairs, 100 sequences, and a total of more than 11K
frames. Hi4D contains rich interaction-centric annotations
in 2D and 3D alongside accurately registered parametric
body models. We define varied human pose and shape esti-
mation tasks on this dataset and provide results from state-
of-the-art methods on these benchmarks. Hi4D dataset can
be found at https://ait.ethz.ch/Hi4D . | 1. Introduction
While computer vision systems have made rapid
progress in estimating the 3D body pose and shape of
individuals and well-spaced groups, currently there are
no methods that can robustly disentangle and reconstruct
closely interacting people. This is in part due to the lack of
suitable datasets. While some 3D datasets exist that contain
human-human interactions, like ExPI [72] and CHI3D [22],
they typically lack high-fidelity dynamic textured geometry,
do not always provide registered parametric body models
and do not always provide rich contact information and are
therefore not well suited to study closely interacting people.
Taking a first step towards future AI systems that are
able to interpret the interactions of multiple humans in close
physical interaction and under strong occlusion, we pro-
pose a method and dataset that enables the study of this
new setting. Specifically, we propose Hi4D, a compre-
hensive dataset that contains segmented, yet complete 4D
textured geometry of closely interacting humans, along-
side corresponding registered parametric human models, in-
stance segmentation masks in 2D and 3D, and vertex-level
contact annotations (see Fig. 1). To enable research to-
Dataset | Multi-view Images | Temporal | Reference Data Modalities (3D Pose Format, Textured Scans, Contact Annotations, Instance Masks)
ShakeFive2 [23] ✓ Joint Positions
MuPoTS-3D [53] ✓ ✓ Joint Positions
ExPI [72] ✓ ✓ Joint Positions ✓
MultiHuman [77] Parametric Body Model† ✓
CHI3D [22] ✓ ✓ Parametric Body Model region-level (631 events)
Hi4D (Ours) ✓ ✓ Parametric Body Model ✓ vertex-level (>6K events) ✓
Table 1. Comparison of datasets containing close human interaction. † In [77] registrations are not considered as ground-truth.
wards automated analysis of close human interactions, we
contribute experimental protocols for computer vision tasks
that are enabled by Hi4D.
Capturing such a dataset and the corresponding anno-
tations is a very challenging endeavor in itself. While
multi-view, volumetric capture setups can reconstruct high-
quality 4D textured geometry of individual subjects, even
modern multi-view systems typically fuse 3D surfaces of
spatially proximal subjects into a single, connected mesh
(see Fig. 1, A). Thus deriving and maintaining complete,
per subject 4D surface geometry, parametric body registra-
tion, and contact information from such reconstructions is
non-trivial. In contrast to the case of rigid objects, simple
tracking schemes fail due to very complex articulations and
thus strong changes in terms of geometry. Moreover, con-
tact itself will further deform the shape.
To address these problems, we propose a novel method
to track and segment the 4D surface of multiple closely in-
teracting people through extended periods of dynamic phys-
ical contact. Our key idea is to make use of emerging neu-
ral implicit surface representations for articulated shapes,
specifically SNARF [13], and create personalized human
avatars of each individual (see Fig. 2, A). These avatars
then serve as strong personalized priors to track and thus
segment the fused geometry of multiple interacting people
(see Fig. 2, B). To this end, we alternate between pose opti-
mization and shape refinement (see Fig. 3). The optimized
pose and refined surfaces yield precise segmentations of the
merged input geometry. The tracked 3D instances (Fig. 1,
B) then provide 2D and 3D instance masks (Fig. 1, C),
vertex-level contact annotations (Fig. 1, B), and can be used
to register parametric human models (Fig. 1, D).
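The alternating scheme can be summarized with the following illustrative pseudocode; `fitting_loss` and `assign_points` are hypothetical placeholders for the avatar-to-scan objective and the point-to-instance assignment, and the optimizer settings are assumptions, not the paper's actual interface.
```python
# Illustrative sketch of alternating pose optimization and surface refinement
# with personalized implicit avatars. `poses` are leaf tensors with
# requires_grad=True; `fitting_loss` / `assign_points` are hypothetical.
import torch

def alternate_pose_shape(avatars, poses, scan, n_rounds=3, n_steps=50, lr=1e-3):
    for _ in range(n_rounds):
        # Stage 1: refine per-subject poses with avatar surfaces frozen
        opt = torch.optim.Adam(poses, lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = sum(a.fitting_loss(scan, p) for a, p in zip(avatars, poses))
            loss.backward()
            opt.step()
        # Stage 2: refine avatar surfaces with poses frozen
        opt = torch.optim.Adam([q for a in avatars for q in a.parameters()], lr=lr)
        for _ in range(n_steps):
            opt.zero_grad()
            loss = sum(a.fitting_loss(scan, p.detach()) for a, p in zip(avatars, poses))
            loss.backward()
            opt.step()
    # refined avatars/poses induce an instance segmentation of the fused scan
    return [a.assign_points(scan, p) for a, p in zip(avatars, poses)]
```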
Equipped with this method, we capture Hi4D, which
stands for H umans i nteracting in 4D , a dataset of humans
in close physical interaction alongside high-quality 4D an-
notations. The dataset contains 20 pairs of subjects (24
male, 16 female), and 100 sequences with more than 11K
frames. To our best knowledge, ours is the first dataset con-
taining rich interaction-centric annotations and high-quality
4D textured geometry of closely interacting humans.
To provide baselines for future work, we evaluate several
state-of-the-art methods for multi-person pose and shape
modeling from images on Hi4D in different settings such as monocular and multi-view human pose estimation and
detailed geometry reconstruction. Our baseline experi-
ments show that our dataset provides diverse and challeng-
ing benchmarks, opening up new directions for research. In
summary, we contribute:
• A novel method based on implicit avatars to track and
segment 4D scans of closely interacting humans.
• Hi4D, a dataset of 4D textured scans with correspond-
ing multi-view RGB images, parametric body models,
instance segmentation masks and vertex-level contact.
• Several experimental protocols for computer vision
tasks in the close human interaction setting.
|
Xu_Uncovering_the_Missing_Pattern_Unified_Framework_Towards_Trajectory_Imputation_and_CVPR_2023 | Abstract
Trajectory prediction is a crucial undertaking in under-
standing entity movement or human behavior from observed
sequences. However, current methods often assume that
the observed sequences are complete while ignoring the
potential for missing values caused by object occlusion,
scope limitation, sensor failure, etc. This limitation in-
evitably hinders the accuracy of trajectory prediction. To
address this issue, our paper presents a unified framework,
the Graph-based Conditional Variational Recurrent Neural
Network (GC-VRNN), which can perform trajectory impu-
tation and prediction simultaneously. Specifically, we in-
troduce a novel Multi-Space Graph Neural Network (MS-
GNN) that can extract spatial features from incomplete ob-
servations and leverage missing patterns. Additionally, we
employ a Conditional VRNN with a specifically designed
Temporal Decay (TD) module to capture temporal depen-
dencies and temporal missing patterns in incomplete trajec-
tories. The inclusion of the TD module allows for valuable
information to be conveyed through the temporal flow. We
also curate and benchmark three practical datasets for the
joint problem of trajectory imputation and prediction. Ex-
tensive experiments verify the exceptional performance of
our proposed method. As far as we know, this is the first
work to address the lack of benchmarks and techniques for
trajectory imputation and prediction in a unified manner.
| 1. Introduction
Modeling and predicting future trajectories play an in-
dispensable role in various applications, i.e., autonomous
driving [23, 25, 65], motion capture [57, 59], behavior un-
derstanding [20, 38], etc. However, accurately predicting
movement patterns is challenging due to their complex and
*Work done during Yi’s internship at Honda Research Institute, under
Chiho Choi’s supervision.
Figure 1. A typical example of incomplete observed trajectory.
The black car (ego-vehicle) is waiting at the intersection at time
stept0, and the yellow car is moving. At time step t1, the red
car is occluded by the yellow car, and the blue car appears. The
bottom right figure indicates the “visibility” of four cars, where
dark means visible and light color means invisible.
subtle nature. Despite significant attention and numerous
proposed solutions [5, 34, 62, 67–69] to the trajectory pre-
diction problem, existing methods often assume that agent
observations are entirely complete, which is too strong an
assumption to satisfy in practice.
For example, in sports games such as football or soccer,
not all players are always visible in the live view due to the
limitation of the camera view. In addition, the live camera
always tracks and moves along the ball, resulting in some
players appearing and disappearing throughout the view,
depending on their relative locations to the ball. Fig. 2 illus-
trates this common concept. Similar situations also arise in
autonomous driving where occlusion or sensor failure can
cause missing data. As illustrated in Fig. 1, at time step
Figure 2. Three continuous frames of a “throw-and-catch” sequence in a live football match, where the ball is being circled. During this
attacking play, the camera follows and zooms in on the ball. Only a subset of players is visible within the camera’s field of view.
t0, there is no observation of the blue car. At time step t1,
the red car is occluded by the yellow car from the black car
perspective, and the blue car appears at the intersection to
turn right. Predicting future trajectories of entities under
these circumstances will no doubt hinder the performance
and negatively influence behavior understanding of moving
agents or vehicle safety operations.
Although various recent works [15,17,37,53,73,74] have
investigated the time series imputation problem, most are
autoregressive models that impute current missing values
from previous time steps, making them highly susceptible
to compounding errors for long-range temporal modeling.
Additionally, commonly used benchmarks [29,50,54,82] do
not contain interacting entities. Some recent works [36, 78]
have studied the imputation problem in a multi-agent sce-
nario. Although these methods have achieved promising
imputation performance, they fail to explore the relevance
between the trajectory imputation and the prediction task.
In fact, complete trajectory observation is essential for pre-
diction, and accurate trajectory prediction can offer valu-
able information about the temporal correlation between
past and future, ultimately aiding in the imputation process.
In this paper, we present a unified framework, Graph-
based Conditional Variational Recurrent Neural Network
(GC-VRNN), that simultaneously handles the trajectory im-
putation and prediction. Specifically, we introduce a novel
Multi-Space Graph Neural Network (MS-GNN) to extract
compact spatial features of incomplete observations. Mean-
while, we adopt a Conditional VRNN (C-VRNN) to model
the temporal dependencies, where a Temporal Decay (TD)
module is designed to learn the missing patterns of incom-
plete observations. The critical idea behind our method is to
acquire knowledge of the spatio-temporal features of miss-
ing patterns, and then unite these two objectives through
shared parameters. Sharing valuable information allows
these two tasks to support and promote one another for bet-
ter performance mutually. In addition, to support the joint
evaluation of multi-agent trajectory imputation and pre-
diction, we curate and benchmark three practical datasets
from different domains, Basketball-TIP ,Football-TIP , and
Vehicle-TIP , where the incomplete trajectories are gener-
ated via reasonable and practical strategies. The main contributions of our work can be summarized as follows:
• We investigate the multi-agent trajectory imputation
and prediction problem and develop a unified frame-
work, GC-VRNN, for imputing missing observations
and predicting future trajectories simultaneously.
• We propose a novel MS-GNN that can extract com-
prehensive spatial features of incomplete observations
and adopt a C-VRNN with a specifically designed TD
module for better learning temporal missing patterns,
and valuable information is shared via temporal flow.
• We curate and benchmark three datasets for the multi-
agent trajectory imputation and prediction problem.
Strong baselines are set up for this joint problem.
• Thorough experiments verify the consistent and excep-
tional performance of our proposed method.
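As a rough illustration of how a Temporal Decay mechanism for incomplete observations can be realized (in the spirit of GRU-D-style decay; the exact TD module in GC-VRNN may differ), consider:
```python
# Minimal sketch of a temporal-decay mechanism for incomplete trajectories.
# `delta` holds the elapsed time since each feature was last observed; the
# learned decay shrinks the temporal memory as observation gaps grow.
import torch
import torch.nn as nn

class TemporalDecay(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, hidden, delta):
        # hidden: (B, D) previous hidden state; delta: (B, D) time gaps
        gamma = torch.exp(-torch.relu(self.lin(delta)))  # decay factor in (0, 1]
        return gamma * hidden                            # decayed temporal memory
```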
|
Yuan_Devil_Is_in_the_Queries_Advancing_Mask_Transformers_for_Real-World_CVPR_2023 | Abstract
Real-world medical image segmentation has tremen-
dous long-tailed complexity of objects, among which tail
conditions correlate with relatively rare diseases and are
clinically significant. A trustworthy medical AI algo-
rithm should demonstrate its effectiveness on tail condi-
tions to avoid clinically dangerous damage in these out-of-
distribution (OOD) cases. In this paper, we adopt the con-
cept of object queries in Mask Transformers to formulate
semantic segmentation as a soft cluster assignment. The
queries fit the feature-level cluster centers of inliers dur-
ing training. Therefore, when performing inference on a
medical image in real-world scenarios, the similarity be-
tween pixels and the queries detects and localizes OOD re-
gions. We term this OOD localization as MaxQuery. Fur-
thermore, the foregrounds of real-world medical images,
whether OOD objects or inliers, are lesions. The differ-
ence between them is less than that between the foreground
and background, possibly misleading the object queries to
focus redundantly on the background. Thus, we propose
a query-distribution (QD) loss to enforce clear boundaries
between segmentation targets and other regions at the query
level, improving the inlier segmentation and OOD indi-
cation. Our proposed framework is tested on two real-
world segmentation tasks, i.e., segmentation of pancreatic
and liver tumors, outperforming previous state-of-the-art
algorithms by an average of 7.39% on AUROC, 14.69% on
AUPR, and 13.79% on FPR95 for OOD localization. On
the other hand, our framework improves the performance
of inlier segmentation by an average of 5.27% DSC when
compared with the leading baseline nnUNet.
| 1. Introduction
Image segmentation is a fundamental task in medical im-
age analysis. With the recent advancements in computer
* Corresponding author. ([email protected])
†Work was done during an internship at Alibaba DAMO Academy
[Figure 1 panels: in-distribution common pancreatic diseases (IPMN, PDAC, SCN, CP, MCN, SPT, PNET) versus (near) out-of-distribution peri-pancreatic diseases (AC, DC), rare diseases, and unseen diseases; data distribution, inlier segmentation, and OOD localization for real-world trustworthy medical AI.]
Figure 1. Real-world medical image segmentation. Real-world
medical outliers (unseen, usually rare, tumors) are “near” to the
inliers (labeled lesions), forming a typical near-OOD problem. A
real-world medical OOD detection/localization model should fo-
cus more on subtle differences between outliers and inliers than
the significant difference between foreground and background.
vision and deep learning, automated medical image seg-
mentation has reached expert-level performance in various
applications [3, 28, 54]. Most medical image segmenta-
tion methods are based on supervised machine learning that
heavily relies on collecting and annotating training data.
However, real-world medical images are long-tailed dis-
tributed. The tail conditions are outliers and inadequate (or
even unable) to train a reliable model [35, 61, 63]. Yet, the
model trained with inliers is risky for triggering failures or
errors in real-world clinical deployment [43]. For exam-
ple, in pancreatic tumor image analysis, a miss-detection
of metastatic cancer will directly threaten life; an erroneous
recognition of a benign cyst as malignant will lead to unnec-
essary follow-up tests and patient anxiety. Medical image
segmentation models should thus demonstrate the ability
to detect and localize out-of-distribution (OOD) conditions,
especially in some safety-critical clinical applications.
Previous studies have made valuable attempts on med-
ical OOD localization [48, 65], including finding lesions
apart from normal cases or simulating OOD conditions for
model validation. However, the real-world clinical scenario,
such as tumor segmentation, is more complex, where either
in-distribution or OOD cases have multiple types of tumors.
Establishing a direct relationship between image pixels and
excessive semantics (types of tumors) is difficult for real-
world medical image segmentation. Using this relationship
to distinguish inliers and outliers is even more challenging.
Fortunately, several works about Mask Transformers [5, 9]
have inspired us to split segmentation as a two-stage pro-
cess of per-pixel cluster assignment and cluster classifica-
tion [57, 58]. A well-defined set of inlier clusters may
greatly benefit in identifying the OOD conditions from the
medical images. Therefore, we propose MaxQuery, a med-
ical image semantic segmentation framework that advances
Mask Transformers to localize OOD targets. The frame-
work adopts learnable object queries to iteratively fit inlier
cluster centers. Since the affinity between OODs and an
inlier cluster center should be less than that within the clus-
ter (between inliers and cluster centers), MaxQuery uses the
negative of such affinity as an indicator to detect OODs.
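A minimal sketch of this scoring rule is given below; the dot-product affinity and softmax normalization are illustrative choices, not necessarily the exact formulation used in the paper.
```python
# Sketch of the MaxQuery idea: a pixel's OOD score is the negative of its
# maximum affinity to the learned object queries (inlier cluster centers).
import torch

def maxquery_ood_score(pixel_feats, queries):
    # pixel_feats: (B, C, H, W) decoder pixel embeddings
    # queries:     (N, C) learned object queries acting as cluster centers
    logits = torch.einsum('bchw,nc->bnhw', pixel_feats, queries)  # affinity maps
    probs = logits.softmax(dim=1)                                 # soft cluster assignment
    max_affinity, _ = probs.max(dim=1)                            # (B, H, W)
    return -max_affinity                                          # higher => more likely OOD
```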
Several recent works further define real-world medical
image OOD localization as a near-OOD problem [43, 51],
where the distribution gaps between inlier and OOD tumors
are overly subtle, as shown in Fig. 1. Thus, the near-OOD
problems are more difficult. Our pilot experiments show
that the cluster centers redundantly represent the large re-
gions of background and organ rather than tumors, compro-
mising the necessary variability of the cluster assignments
for OOD localization. To solve this issue, we propose the
query-distribution (QD) loss to regularize specific quantities
of object queries on background, organ, and tumors. This
enforces the diversity of the cluster assignments, benefiting
the segmentation and recognition of OOD tumors.
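One hedged way to picture such a query-distribution regularizer is a divergence between the empirical query assignment and a preset quota over background, organ, and tumor queries; the paper's actual QD loss may be formulated differently.
```python
# Hypothetical sketch of a query-distribution style regularizer: the queries'
# class-assignment mass is pushed toward a preset quota over {background,
# organ, tumor}. Not the paper's exact loss; shown only to convey the idea.
import torch
import torch.nn.functional as F

def qd_loss(query_class_logits, target_quota):
    # query_class_logits: (N, 3) per-query logits over background/organ/tumor
    # target_quota: (3,) tensor of desired query fractions per region (sums to 1)
    assign = query_class_logits.softmax(dim=-1).mean(dim=0)  # empirical query distribution
    return F.kl_div(assign.log(), target_quota, reduction='sum')
```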
We curate two real-world medical image datasets of
(pancreatic and liver) tumor images from 1,088 patients for
image segmentation and OOD localization. Specifically,
we collect consecutive patients’ contrast-enhanced 3D CT
imaging with a full spectrum of tumor types confirmed by
pathology. In these scenarios, the OOD targets are rare tu-
mors and diseases. Our method shows robust performance
across two datasets, significantly outperforming the previ-
ous leading OOD localization methods by an average of
7.39% in AUROC, 14.69% in AUPR, 13.79% in FPR95
for localization, and 3.42% for case-level detection. Mean-
while, our framework also improves the performance of in-
lier segmentation by an average of 5.27% compared with
the strong baseline nnUNet [24].
We summarize our main contributions as follows:
• To the best of our knowledge, we are the first to explore
the near-OOD detection and localization problem in
medical image segmentation. The proposed method
has a strong potential for utility in clinical practice.
• We propose a novel approach, MaxQuery, using the
maximum score of query response as a major indicator
for OOD localization.
• A query-distribution (QD) loss is proposed to con-
centrate the queries on important foreground regions,
demonstrating superior effectiveness for near-OOD
problems.
• We curate two medical image datasets for tumor se-
mantic segmentation/detection of real-world OODs.
Our proposed framework substantially outperforms
previous leading OOD localization methods and im-
proves upon the inlier segmentation performance.
|
Ye_PVO_Panoptic_Visual_Odometry_CVPR_2023 | Abstract
We present PVO, a novel panoptic visual odometry frame-
work to achieve more comprehensive modeling of the scene
motion, geometry, and panoptic segmentation information.
Our PVO models visual odometry (VO) and video panop-
tic segmentation (VPS) in a unified view, which makes the
two tasks mutually beneficial. Specifically, we introduce
a panoptic update module into the VO Module with the
guidance of image panoptic segmentation. This Panoptic-
Enhanced VO Module can alleviate the impact of dynamic
objects in the camera pose estimation with a panoptic-aware
dynamic mask. On the other hand, the VO-Enhanced VPS
Module also improves the segmentation accuracy by fusing
the panoptic segmentation result of the current frame on the
fly to the adjacent frames, using geometric information such
as camera pose, depth, and optical flow obtained from the
VO Module. These two modules contribute to each other
through recurrent iterative optimization. Extensive exper-
iments demonstrate that PVO outperforms state-of-the-art
methods in both visual odometry and video panoptic segmen-
tation tasks.
∗ indicates equal contribution. † indicates the corresponding author. | 1. Introduction
Understanding the motion, geometry, and panoptic seg-
mentation of the scene plays a crucial role in computer vision
and robotics, with applications ranging from autonomous
driving to augmented reality. In this work, we take a step to-
ward solving this problem to achieve a more comprehensive
modeling of the scene with monocular videos.
Two tasks have been proposed to address this problem,
namely visual odometry (VO) and video panoptic segmen-
tation (VPS). In particular, VO [9, 11, 36] takes monocular
videos as input and estimates the camera poses under the
static scene assumption. To handle dynamic objects in the
scene, some dynamic SLAM systems [2, 43] use instance
segmentation network [14] for segmentation and explicitly
filter out certain classes of objects, which are potentially
dynamic, such as pedestrians or vehicles. However, such ap-
proaches ignore the fact that potentially dynamic objects can
actually be stationary in the scene, such as a parked vehicle.
In contrast, VPS [17, 42, 49] focuses on tracking individual
instances in the scene across video frames given some ini-
tial panoptic segmentation results. Current VPS methods
do not explicitly distinguish whether the object instance is
moving or not. Although existing approaches broadly solve
these two tasks independently, it is worth noticing that dy-
namic objects in the scene can make both tasks challenging.
Recognizing this relevance between the two tasks, some
methods [5, 7, 19, 21] try to tackle both tasks simultaneously
and train motion-semantics networks in a multi-task man-
ner, shown in Fig. 2. However, the loss functions used in
these approaches may contradict each other, thus leading to
performance drops.
In this work, we propose a novel panoptic visual odom-
etry (PVO) framework that tightly couples these two tasks
using a unified view to model the scene comprehensively.
Our insight is that VPS can adjust the weight of VO with
panoptic segmentation information (the weights of the pixels
of each instance should be correlated) and VO can convert
the tracking and fusion of video panoptic segmentation from
2D to 3D. Inspired by the seminal Expectation-Maximization
algorithm [26], recurrent iterative optimization strategy can
make these two tasks mutually beneficial.
Our PVO consists of three modules, an image panoptic
segmentation module, a Panoptic-Enhanced VO Module,
and a VO-Enhanced VPS Module. Specifically, the panoptic
segmentation module (see Sec. 3.1) takes in single images
and outputs the image panoptic segmentation results, which
are then fed into the Panoptic-Enhanced VO Module as ini-
tialization. Note that although we choose PanopticFPN [20],
any segmentation model can be used in the panoptic segmen-
tation module. In the Panoptic-Enhanced VO Module (see
Sec. 3.2), we propose a panoptic update module to filter out
the interference of dynamic objects and hence improve the
accuracy of pose estimation in the dynamic scene. In the
VO-Enhanced VPS Module (see Sec. 3.3), we introduce an
online fusion mechanism to align the multi-resolution fea-
tures of the current frame to the adjacent frames based on the
estimated pose, depth, and optical flow. This online fusion
mechanism can effectively solve the problem of multiple
object occlusion. Experiments show that the recurrent itera-
tive optimization strategy improves the performance of both
VO and VPS. Overall, our contributions are summarized as
four-fold.
•We present a novel Panoptic Visual Odometry (PVO)
framework, which can unify VO and VPS tasks to
model the scene comprehensively.
•A panoptic update module is introduced and incorpo-
rated into the Panoptic-Enhanced VO Module to im-
prove pose estimation.
•An online fusion mechanism is proposed in the VO-
Enhanced VPS Module, which helps to improve video
panoptic segmentation.
•Extensive experiments demonstrate that the proposed
PVO with recurrent iterative optimization is superior
to state-of-the-art methods in both visual odometry and
video panoptic segmentation tasks.
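As a simplified illustration of the VO-Enhanced fusion idea, adjacent-frame features can be warped to the current frame with the estimated optical flow before fusion; the real module additionally uses depth and camera pose, which are omitted here, and the averaging fusion is an assumption.
```python
# Minimal sketch of flow-based feature alignment for an online fusion step:
# features from an adjacent frame are warped to the current frame with the
# estimated backward optical flow, then fused (here by simple averaging).
import torch
import torch.nn.functional as F

def warp_and_fuse(feat_prev, feat_cur, flow):
    # feat_prev, feat_cur: (B, C, H, W); flow: (B, 2, H, W) backward flow (cur -> prev)
    b, _, h, w = flow.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing='ij')
    base = torch.stack((xs, ys), dim=0).float().to(flow.device)   # (2, H, W) pixel grid
    coords = base.unsqueeze(0) + flow                             # sampling locations
    # normalize coordinates to [-1, 1] for grid_sample
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((coords_x, coords_y), dim=-1)              # (B, H, W, 2)
    warped = F.grid_sample(feat_prev, grid, align_corners=True)
    return 0.5 * (warped + feat_cur)
```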
Figure 2. Illustration. Our PVO unifies visual odometry and
video panoptic segmentation so that the two tasks can be mutually
reinforced by iterative optimization. In contrast, methods such as
SimVODIS [19] optimize motion and semantic information in a
multi-task manner.
|
Yang_Paint_by_Example_Exemplar-Based_Image_Editing_With_Diffusion_Models_CVPR_2023 | Abstract
Language-guided image editing has achieved great suc-
cess recently. In this paper, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose a content bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. The code and pretrained models are available at https://github.com/Fantasy-Studio/Paint-by-Example.
*Work is done during the internship at Microsoft Research Asia.
†Corresponding author.
| 1. Introduction
Creative editing for photos has become a ubiquitous need
due to the advances in a plethora of social media platforms. AI-based techniques [37] significantly lower the barrier of fancy image editing that traditionally requires specialized software and labor-intensive manual operations. Deep neural networks can now produce compelling results for various low-level image editing tasks, such as image inpainting [61, 68], composition [41, 67, 72], colorization [53, 71, 74] and aesthetic enhancement [7, 11], by learning from richly available paired data. A more challenging scenario, on the other hand, is semantic image editing, which intends to manipulate the high-level semantics of image content while preserving image realism. Tremendous efforts [2, 5, 36, 49, 55, 57] have been made along this way, mostly relying on the semantic latent space of generative models, e.g., GANs [16, 28, 70], yet the majority of existing works are limited to specific image genres.
Recent large-scale language-image (LLI) models, based
on either auto-regressive models [ 12,69] or diffusion mod-
els [ 19,45,50,54], have shown unprecedented generative
power in modeling complex images. These models enable various image manipulation tasks [3, 22, 30, 52] previously unassailable, allowing image editing for general images with the guidance of a text prompt. However, even the detailed textual description inevitably introduces ambiguity and may not accurately reflect the user-desired effects; indeed, many fine-grained object appearances can hardly be specified by the plain language. Hence, it is crucial to develop a more intuitive approach to ease fine-grained image editing for novices and non-native speakers.
In this work, we propose an exemplar-based image editing approach that allows accurate semantic manipulation on the image content according to an exemplar image provided by users or retrieved from the database. As the saying goes, “a picture is worth a thousand words”. We believe images better convey the user's desired image customization in a more granular manner than words. This task is completely different from image harmonization [20, 64] that mainly focuses on color and lighting correction when compositing the foreground objects, whereas we aim for a much more complex job: semantically transforming the exemplar, e.g., producing a varied pose, deformation or viewpoint, such that the edited content can be seamlessly implanted according to the image context. In fact, ours automates the traditional image editing workflow where artists perform tedious transformations upon image assets for coherent image blending.
To achieve our goal, we train a diffusion model [ 24,59]
conditioned on the exemplar image. Different from text-
guided models, the core challenge is that it is infeasible to collect enough triplet training pairs comprising source image, exemplar and corresponding editing ground truth. One workaround is to randomly crop the objects from the input image, which serves as the reference when training the inpainting model. The model trained from such a self-reference setting, however, cannot generalize to real exemplars, since the model simply learns to copy and paste the reference object into the final output. We identify several key factors that circumvent this issue. The first is to utilize a generative prior. Specifically, a pretrained text-to-image model has the ability to generate high-quality desired results; we leverage it as initialization to avoid falling into the copy-and-paste trivial solution. However, a long time of finetuning may still cause the model to deviate from the prior knowledge and ultimately degenerate again. Hence, we introduce the content bottleneck for self-reference conditioning in which we drop the spatial tokens and only regard the global image embedding as the condition. In this way, we enforce the network to understand the high-level semantics of the exemplar image and the context from the source image, thus preventing trivial results during the self-supervised training. Moreover, we apply aggressive augmentation on the self-reference image which can effectively reduce the training-test gap.
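A sketch of the content-bottleneck conditioning could look as follows; the encoder, the class-token convention, and the projection MLP are assumptions for illustration, not the exact architecture of the paper.
```python
# Sketch of content-bottleneck conditioning: only a global embedding of the
# (strongly augmented) exemplar is kept as the condition, while spatial tokens
# are dropped. Encoder choice and MLP shape are illustrative assumptions.
import torch
import torch.nn as nn

class ExemplarCondition(nn.Module):
    def __init__(self, image_encoder, embed_dim=768, cond_dim=768):
        super().__init__()
        self.image_encoder = image_encoder          # e.g., a frozen image encoder (assumed)
        self.proj = nn.Sequential(nn.Linear(embed_dim, cond_dim), nn.GELU(),
                                  nn.Linear(cond_dim, cond_dim))

    def forward(self, exemplar):
        with torch.no_grad():
            tokens = self.image_encoder(exemplar)   # (B, L, D), class token first (assumed)
        global_embed = tokens[:, 0]                 # keep only the global token
        return self.proj(global_embed).unsqueeze(1) # (B, 1, cond_dim) condition for the UNet
```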
We further improve the editability of our approach in
two aspects. One is that our training uses irregular random masks so as to mimic the casual user brush used in practical editing. We also prove that classifier-free guidance [25]
is beneficial to boost both the image quality and the style
resemblance to the reference.
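At sampling time, classifier-free guidance combines the conditional and unconditional noise predictions in the standard way; the guidance scale `s` is a tunable parameter.
```python
# Standard classifier-free guidance: blend the exemplar-conditioned and
# unconditional noise predictions with a guidance scale s.
def classifier_free_guidance(eps_cond, eps_uncond, s: float):
    # eps_cond / eps_uncond: noise predictions with and without the condition
    return eps_uncond + s * (eps_cond - eps_uncond)
```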
The proposed method, Paint by Example, well solves the semantic image composition problem where the reference is semantically transformed and harmonized before blending into another image, as shown in Figure 1. Our method shows a significant quality advantage over prior works in a similar setting. Notably, our editing just involves a single forward of the diffusion model without any image-specific optimization, which is a necessity for many real applications. To summarize, our contributions are as follows:
• We propose a new image editing approach, Paint by Example, which semantically alters the image content based on an exemplar image. This approach offers fine-grained control while being convenient to use.
• We solve the problem with an image-conditioned diffusion model trained in a self-supervised manner. We propose a group of techniques to tackle the degenerate challenge.
• Our approach performs favorably over prior arts for in-the-wild image editing, as measured by both quantitative metrics and subjective evaluation.
|
Zhang_Lookahead_Diffusion_Probabilistic_Models_for_Refining_Mean_Estimation_CVPR_2023 | Abstract
We propose lookahead diffusion probabilistic models
(LA-DPMs) to exploit the correlation in the outputs of the
deep neural networks (DNNs) over subsequent timesteps in
diffusion probabilistic models (DPMs) to refine the mean
estimation of the conditional Gaussian distributions in the
backward process. A typical DPM first obtains an estimate
of the original data sample x by feeding the most recent state z_i and index i into the DNN model and then computes the mean vector of the conditional Gaussian distribution for z_{i-1}. We propose to calculate a more accurate estimate for x by performing extrapolation on the two estimates of x that are obtained by feeding (z_{i+1}, i+1) and (z_i, i) into the
DNN model. The extrapolation can be easily integrated into
the backward process of existing DPMs by introducing an
additional connection over two consecutive timesteps, and
fine-tuning is not required. Extensive experiments showed
that plugging in the additional connection into DDPM,
DDIM, DEIS, S-PNDM, and high-order DPM-Solvers leads
to a significant performance gain in terms of Fréchet in-
ception distance (FID) score. Our implementation is avail-
able at https://github.com/guoqiang-zhang-x/LA-DPM.
| 1. Introduction
As one type of generative model, diffusion probabilis-
tic models (DPMs) have made significant progress in recent
years. The pioneering work [ 17] applied non-equilibrium
statistical physics to estimating probabilistic data distribu-
tions. In doing so, a Markov forward diffusion process is
constructed by systematically inserting additive noise in the
data until essentially only noise remains. The data distribu-
tion is then gradually restored by a reverse diffusion pro-
cess starting from a simple parametric distribution. The
main advantage of DPMs over classic tractable models (e.g.,
HMMs, GMMs, see [5]) is that they can accurately model both the high and low likelihood regions of the data distribu-
tion via the progressive estimation of noise-perturbed data
distributions. In comparison to generative adversarial net-
works (GANs) [ 1,8,9], DPMs exhibit more stable training
dynamics by avoiding adversarial learning.
The work [ 10] focuses on a particular type of DPM,
namely a denoising diffusion probabilistic model (DDPM),
and shows that after a sufficient number of timesteps (or
equivalently iterations) in the backward process, DDPM
can achieve state-of-the-art performance in image genera-
tion tasks by the proper design of a weighted variational
bound (VB). In addition, by inspection of the weighted VB,
it is found that the method score matching with Langevin dy-
namics (SMLD) [ 19,20] can also be viewed as a DPM. The
recent work [ 21] interprets DDPM and SMLD as search of
approximate solutions to stochastic differential equations.
See also [ 15] and [ 7] for improved DPMs that lead to better
log-likelihood scores and sampling qualities, respectively.
One inconvenience of a standard DPM is that the asso-
ciated deep neural network (DNN) needs to run for a suf-
ficient number of timesteps to achieve high sampling qual-
ity while the generative model of a GAN only needs to run
once. This has led to an increasing research focus on re-
ducing the number of reverse timesteps in DPMs while re-
taining a satisfactory sampling quality (see [ 22] for a de-
tailed overview). Song et al. proposed the so-called de-
noising diffusion implicit model (DDIM) [ 18] as an exten-
sion of DDPM from a non-Markov forward process point of
view. The work [ 11] proposed to learn a denoising schedule
in the reverse process by explicitly modeling the signal-to-
noise ratio in the image generation task. [ 6] and [ 12] consid-
ered effective audio generation by proposing different noise
scheduling schemes in DPMs. Differently from the above
methods, the recent works [ 4] and [ 3] proposed to estimate
the optimal variance of the backward conditional Gaussian
distribution to improve sampling qualities for both small
and large numbers of timesteps.
Another approach for improving the sampling quality
of DPMs with a limited computational budget is to exploit
high-order methods for solving the backward ordinary dif-
ferential equations (ODEs) (see [ 21]). The authors of [ 13]
proposed pseudo numerical methods for diffusion models
(PNDM), of which high-order polynomials of the estimated
Gaussian noises {ε̂_θ(z_{i+j}, i+j) | r ≥ j ≥ 0} are introduced to better estimate the latent variable z_{i-1} at iteration i, where ε̂_θ represents a pre-trained neural network model
for predicting the Gaussian noises. The work [ 23] further
extends [ 13] by refining the coefficients of the high-order
polynomials of the estimated Gaussian noises, and proposes
the diffusion exponential integrator sampler (DEIS). Re-
cently, the authors of [ 14] considered solving the ODEs of a
diffusion model differently from [ 23]. In particular, a high-
order Taylor expansion of the estimated Gaussian noises
was employed to better approximate the continuous solu-
tions of the ODEs, where the developed sampling methods
are referred to as DPM-Solvers.
We note that the computation of z_{i-1} at timestep i in the backward process of existing DPMs (including the high-order ODE solvers) can always be reformulated in terms of an estimate x̂ for the original data sample x in combination with other terms. In principle, as the timestep i decreases, the estimate x̂ would become increasingly accurate. In this paper, we aim to improve the estimation accuracy of x at each timestep i in computation of the mean vector for the latent variable z_{i-1}. To do so, we propose to make an extrapolation from the two most recent estimates of x obtained at timestep i and i+1. The extrapolation allows the backward
process to look ahead towards a noisy direction targeting
x, thus improving the estimation accuracy. The extrapola-
tion can be realized by simply introducing additional con-
nections between two consecutive timesteps, which can be
easily plugged into existing DPMs with negligible computa-
tional overhead. We refer to the improved diffusion models
as Lookahead-DPMs (LA-DPMs).
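In code, the lookahead refinement of the clean-sample estimate amounts to a simple extrapolation; the coefficient `lam` below is a generic placeholder, whereas the paper derives sampler-specific values.
```python
# Sketch of the lookahead extrapolation: the estimate of x at timestep i is
# refined by extrapolating away from the previous (noisier) estimate obtained
# at timestep i+1. `lam` is an assumed tunable strength.
def lookahead_estimate(x_hat_i, x_hat_prev, lam: float = 0.1):
    # x_hat_i:    estimate of x obtained from (z_i, i)
    # x_hat_prev: estimate of x obtained from (z_{i+1}, i+1)
    return x_hat_i + lam * (x_hat_i - x_hat_prev)
```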
We conducted an extensive evaluation by plugging in the
additional connection into the backward process of DDPM,
DDIM, DEIS, S-PNDM, and DPM-Solver. Interestingly, it
is found that the performance gain of LA-DPMs is more
significant for a small number of timesteps. This makes it
attractive for practical applications as it is computationally
preferable to run the backward process in a limited number
of timesteps.
|
Zhang_MD-VQA_Multi-Dimensional_Quality_Assessment_for_UGC_Live_Videos_CVPR_2023 | Abstract
User-generated content (UGC) live videos are often
bothered by various distortions during capture procedures
and thus exhibit diverse visual qualities. Such source videos
are further compressed and transcoded by media server
providers before being distributed to end-users. Because
of the flourishing of UGC live videos, effective video qual-
ity assessment (VQA) tools are needed to monitor and per-
ceptually optimize live streaming videos in the distributing
process. In this paper, we address UGC Live VQA prob-
lems by constructing a first-of-a-kind subjective UGC Live
VQA database and developing an effective evaluation tool.
Concretely, 418 source UGC videos are collected in real
live streaming scenarios and 3,762 compressed ones at dif-
ferent bit rates are generated for the subsequent subjective
VQA experiments. Based on the built database, we de-
velop a M ulti-D imensional VQA (MD-VQA) evaluator to
measure the visual quality of UGC live videos from seman-
tic, distortion, and motion aspects respectively. Extensive
experimental results show that MD-VQA achieves state-of-
the-art performance on both our UGC Live VQA database
and existing compressed UGC VQA databases.
| 1. Introduction
With the rapid development of social media applications
and the advancement of video shooting and processing tech-
nologies, more and more ordinary people are willing to tell
their stories, share their experiences, and have their voice
heard on social media or streaming media platforms such
as Twitch, Tiktok, Taobao, etc. However, due to the lack
of photography skills and professional equipment, the vi-
*These authors contributed equally to this work. The database is avail-
able at https://tianchi.aliyun.com/dataset/148818?t=1679581936815.
†Corresponding author.
Users
UGC Distortion
UploadDistribute
Live Platforms
Viewers
Transcoding
Quality Monitoring
OptimizationFigure 1. The distributing process of UGC live videos, where the
users upload the videos degraded by the UGC distortions to the
live platforms and the distorted videos are further compressed be-
fore being distributed to the viewers. The VQA models can mon-
itor the quality changes of the compressed UGC live videos and
adaptively optimize the transcoding setting.
sual quality of user-generated content (UGC) videos may
be degraded by in-the-wild distortions [51]. What’s more,
in common live platforms, live videos are encoded and dis-
tributed to end-users with very low delay, where compres-
sion algorithms have a significant influence on the visual
quality of live videos because they can greatly reduce trans-
mission bandwidth. As illustrated in Fig. 1, video quality
assessment (VQA) tools play an important role in monitor-
ing, optimizing, and further improving the Quality of Expe-
rience (QoE) of end-users in UGC live streaming systems.
In the literature, many UGC VQA databases have been
carried out [14, 21,36,45,51] to address the impact of gen-
eral in-the-wild distortions on video quality, while some
compression VQA databases [22, 34,40] are proposed to
study the influence of compression artifacts. Then some
compressed UGC VQA databases [1, 21,46] are further
constructed to solve the problem of assessing the quality
of UGC videos with compression distortions. However,
they are either small in scale or employ high-quality UGC
videos as the sources, and all of the mentioned databases
lack videos in live streaming scenes. Therefore, there is
a lack of a proper UGC Live VQA database to develop
and validate the video quality measurement tools for live
streaming systems.
Table 1. Review of common VQA databases, where ’UGC+Compression’ refers to manually encoding the UGC videos with different
compression settings.
Database | Year | Duration/s | Ref. Num. | Scale | Scope | Subjective Evaluating Format
CVD2014 [31] | 2014 | 10-25 | - | 234 | In-capture | In-lab
LIVE-Qualcomm [12] | 2016 | 15 | - | 208 | In-capture | In-lab
KoNViD-1k [14] | 2017 | 8 | - | 1,200 | In-the-wild | Crowdsourced
LIVE-VQC [36] | 2018 | 10 | - | 585 | In-the-wild | Crowdsourced
YouTube-UGC [45] | 2019 | 20 | - | 1,500 | In-the-wild | Crowdsourced
LSVQ [51] | 2021 | 5-12 | - | 39,075 | In-the-wild | Crowdsourced
UGC-VIDEO [21] | 2019 | >10 | 50 | 550 | UGC + Compression | In-lab
LIVE-WC [53] | 2020 | 10 | 55 | 275 | UGC + Compression | In-lab
YT-UGC+(Subset) [46] | 2021 | 20 | 189 | 567 | UGC + Compression | In-lab
ICME2021 [1] | 2021 | - | 1,000 | 8,000 | UGC + Compression | In-lab
TaoLive (proposed) | 2022 | 8 | 418 | 3,762 | UGC + Compression | In-lab
To address UGC Live VQA problems, we first con-
struct a large-scale database named TaoLive, consisting of 418
source UGC videos from the TaoBao [2] live streaming
platform and the corresponding 3,762 compressed videos at
various bit rates. Then we perform a subjective experiment
in a well-controlled environment. Afterward, we propose
a no-reference (NR) M ulti-D imensional VQA (MD-VQA )
model to measure the visual quality of UGC live videos
in terms of semantic, distortion, and motion aspects. The
semantic features are extracted by pretrained convolutional
neural network (CNN) model; the distortion features are ex-
tracted by specific handcrafted image distortion descriptors
(i.e. blur, noise, block effect, exposure, and colorfulness);
and the motion features are extracted from video clips by
pretrained 3D-CNN models. Compared with existing UGC
VQA algorithms, MD-VQA measures visual quality from
multiple dimensions, and these dimensions correspond to
key factors affecting live video quality, which thereby has
better interpretability and performance. The contributions
of this paper are summarized as below:
•We build a large-scale UGC Live VQA database
targeted at the compression artifacts on the UGC
live videos. We collect 418 raw UGC live videos that
are diverse in content, distortion, and quality. Then 8
encoding settings are used, which provides 3,762 com-
pressed UGC live videos in total.
•We carry out a well-controlled in-lab subjective ex-
periment. 44 participants are invited to participate in
the subjective experiment and a total of 165,528 sub-
jective annotations are gathered.
•A multi-dimensional NR-VQA model is proposed,
using pretrained 2D-CNN, handcrafted distortion de-
scriptors, and pretrained 3D-CNN for the semantic,
distortion, and motion features extraction respectively.
The extracted features are then spatio-temporally fused
to obtain the video-level quality score. The extensive
experimental results validate the effectiveness of the
proposed method.
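An illustrative sketch of the three-branch design described above is given below; the backbones, feature dimensions, and fusion head are assumptions rather than the exact MD-VQA architecture.
```python
# Sketch of three-branch feature extraction and fusion: semantic features from
# a pretrained 2D CNN, handcrafted distortion descriptors, and motion features
# from a pretrained 3D CNN, concatenated and regressed to a quality score.
import torch
import torch.nn as nn

class MultiDimVQA(nn.Module):
    def __init__(self, cnn2d, cnn3d, sem_dim, dist_dim, mot_dim):
        super().__init__()
        self.cnn2d, self.cnn3d = cnn2d, cnn3d  # pretrained backbones returning pooled vectors (assumed)
        self.head = nn.Sequential(nn.Linear(sem_dim + dist_dim + mot_dim, 128),
                                  nn.ReLU(), nn.Linear(128, 1))

    def forward(self, frames, clip, distortion_feats):
        # frames: (B, T, 3, H, W) key frames; clip: (B, 3, T, H, W) video clip
        # distortion_feats: (B, dist_dim) handcrafted blur/noise/blockiness/... descriptors
        b, t = frames.shape[:2]
        sem = self.cnn2d(frames.flatten(0, 1)).view(b, t, -1).mean(dim=1)  # temporal pooling
        mot = self.cnn3d(clip)                                             # (B, mot_dim)
        fused = torch.cat([sem, distortion_feats, mot], dim=-1)
        return self.head(fused).squeeze(-1)                                # video-level quality score
```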
Figure 2. Comparison of the quality distribution of reference
videos between the TaoLive and ICME2021 [1] databases. The
source UGC videos in the ICME2021 database are centered on
high-quality levels while the quality levels of the source UGC
videos in the TaoLive database are more diverse.
|
Zhang_Weakly_Supervised_Segmentation_With_Point_Annotations_for_Histopathology_Images_via_CVPR_2023 | Abstract
Image segmentation is a fundamental task in the field of
imaging and vision. Supervised deep learning for segmen-
tation has achieved unparalleled success when sufficient
training data with annotated labels are available. However,
annotation is known to be expensive to obtain, especially
for histopathology images where the target regions are usu-
ally with high morphology variations and irregular shapes.
Thus, weakly supervised learning with sparse annotations
of points is promising to reduce the annotation workload. In
this work, we propose a contrast-based variational model
to generate segmentation results, which serve as reliable
complementary supervision to train a deep segmentation
model for histopathology images. The proposed method
considers the common characteristics of target regions in
histopathology images and can be trained in an end-to-end
manner. It can generate more regionally consistent and
smoother boundary segmentation, and is more robust to un-
labeled ‘novel’ regions. Experiments on two different his-
tology datasets demonstrate its effectiveness and efficiency
in comparison to previous models. Code is available at:
https://github.com/hrzhang1123/CVM_WS_
Segmentation .
| 1. Introduction
Histopathology images are of great importance for clini-
cal diagnosis and prognosis of diseases. With the thriving of
artificial intelligence (AI) techniques over the past decade,
especially deep learning, automatic analysis of histopathol-
ogy images in some tasks has achieved comparable or even
†: Equal contribution; ∗: Corresponding author
Figure 1. Two examples of histopathology images with target re-
gions (e.g. tumor (top row) and stroma (bottom row)) annotated
by contours (b) or in-target and out-of-target points (c).
surpassing performance in comparison with human pathol-
ogists’ reviewing [2, 13, 28, 50]. However, most competent
methods are based on supervised learning, and their perfor-
mances critically rely on a large number of training samples
with detailed annotations. Yet such annotations usually re-
quire experienced pathologists and are expensive (in terms
of cost and time consumption) to obtain, and also subject
to human errors. The annotation problem for histopathol-
ogy images is particularly demanding, not only due to the
large size of such an image but also resulting from irregular
shapes of target tissues to be annotated (See Figure.1).
Weakly supervised learning is a promising solution to
alleviate the issues of obtaining annotations. The anno-
tations for the “weakly supervised” can specifically re-
fer to image-level labelling (multiple instance learning)
[23, 25, 27, 41, 48, 51], partial annotations within an image
(point or scribble annotation) [26], or full annotation in par-
tial images (semi-supervised learning) [31]. Amongst these
three categories, learning by partial annotations has excel-
lent target localization capability, yet requires comparably
less cost to annotate.
Interactive segmentation with scribbles or points has
been widely studied for a few decades [4]. Conventional
methods relied on user interactive input for object segmen-
tation, such as grab-cut [39], Graph-cut [4], active contour-
based selective segmentation [37], random walker [17]. In
recent years it has been a hot topic to develop segmentation
models that can be trained by utilizing only the scribble or
point annotations, formulated as partially-supervised learn-
ing problem [8, 12, 22, 24, 26, 34, 36, 46, 49, 52]. Yet, exist-
ing partially-supervised methods were designed mainly for
natural images or medical images with relatively regular-
shaped objects and consistent textures, and very few are di-
rectly applicable to histopathology images given the above
challenges.
In this work, we focus on partially-supervised learning
for histopathology image segmentation based on in-target
and out-of-target point annotations, where in-target points
refer to those labelled inside the target regions and out-
of-target points refer to the outside ones. Histopathology
images are significantly distinguished from other types of
images. In many histopathology images, the target objects
present distinct regional characteristics. Specifically, as the
example shown in Figure.1, the tumors usually cluster in-
side large regions, in which morphological features or tex-
tures are similar and visually different to the outside re-
gions, and there exist comparably clear boundaries. Ex-
isting works on partially-supervised learning do not well
utilize this characteristic. Moreover, a histopathology im-
age scanned from a tissue section often contains some non-
target regions that are visually or morphologically unique to
those in other images. If such regions are not labelled, they
will be ‘novel’ to a trained machine learning model, and the
predicted categories of them will depend on their similarity
to the neutral tissue and to the target tissue. If such novel re-
gions are more similar to target tissues, they will be wrongly
predicted as the target category, leading to false detections.
Existing methods are limited in tackling this situation, es-
pecially for those methods based on consistency training on
data augmentation [15, 30], as consistency supervision may
amplify such errors.
Based on the above observations and insights, we pro-
pose to adopt a variational segmentation model to provide
complementary supervision information to assist the train-
ing of a deep segmentation model. This variational seg-
mentation model itself will be guided by annotated in-target
and out-of-target points for the segmentation of target re-
gions in the images. Variational methods are powerful tools
for segmenting structures in an image. Often posed as an
optimisation problem, energy functionals can be carefullyconstructed to satisfy certain desired properties, for exam-
ple, maintaining consistency inside the evolving contour, or
constraining the length of the boundary to ensure a smooth
boundary [11, 40].
The uniqueness of variational methods highly fits the
characteristic of target regions in histopathology images, as
mentioned above. However, existing variational methods
cannot be directly applied to high-dimensional deep fea-
tures, which contain higher-level semantic information. To
tackle this problem, we introduce the concept of contrast
maps derived from deep feature correlation maps and for-
mulate a variational model applied to the obtained contrast
maps. Specifically, a set of correlation maps are generated
based on the annotated points on an image. The correspond-
ing contrast maps can then be obtained by the pairs of corre-
lation maps of in-target and out-of-target points through the
subtraction operation. A variational formulation is used to
aggregate the obtained contrast maps for the final segmen-
tation result. Finally, the variational segmentation provides
complementary supervision to the deep segmentation model
through the uncertainty-aware Kullback–Leibler (KL) di-
vergence. Besides, the proposed model can alleviate the
aforementioned issue of unlabeled novel tissue regions, re-
sulting from the subtraction operation in obtaining the con-
trast maps.
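A minimal sketch of the contrast-map construction follows, assuming cosine similarity as the correlation measure and all-pairs subtraction between in-target and out-of-target correlation maps; the paper's exact pairing and similarity choice may differ.
```python
# Sketch of contrast maps: correlation maps are cosine similarities between
# each pixel's deep feature and the features at annotated in-target /
# out-of-target points; subtracting paired maps yields contrast maps that are
# positive where pixels resemble the target points more.
import torch
import torch.nn.functional as F

def contrast_maps(feat, in_pts, out_pts):
    # feat: (C, H, W) deep features; in_pts / out_pts: lists of (y, x) annotations
    f = F.normalize(feat, dim=0)                               # unit-norm per pixel
    def corr(points):
        vecs = torch.stack([f[:, y, x] for (y, x) in points])  # (P, C) point features
        return torch.einsum('pc,chw->phw', vecs, f)            # cosine correlation maps
    corr_in, corr_out = corr(in_pts), corr(out_pts)
    # pair every in-target map with every out-of-target map and subtract
    return corr_in.unsqueeze(1) - corr_out.unsqueeze(0)        # (P_in, P_out, H, W)
```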
In summary, the main contribution of this work is the for-
mulated contrast-based variational model, used as reliable
complementary supervision for training a deep segmenta-
tion model from weak point annotations for histopathology
images. The variational model is based on the proposed new
contrast maps, which incorporate the correlations between
each location in an image and the annotated in-target and
out-of-target points. The proposed model is well suited for
the segmentation of histopathology images and is robust to
unlabeled novel regions. The effectiveness and efficiency of
the proposed method are empirically proven by the experi-
mental results on two different histology datasets.
|
Yang_DeCo_Decomposition_and_Reconstruction_for_Compositional_Temporal_Grounding_via_Coarse-To-Fine_CVPR_2023 | Abstract
Understanding dense action in videos is a fundamen-
tal challenge towards the generalization of vision models.
Several works show that compositionality is key to achiev-
ing generalization by combining known primitive elements,
especially for handling novel composited structures. Com-
positional temporal grounding is the task of localizing dense
action by using known words combined in novel ways in the
form of novel query sentences for the actual grounding. In
recent works, composition is assumed to be learned from
pairs of whole videos and language embeddings through
large scale self-supervised pre-training. Alternatively, one
can process the video and language into word-level prim-
itive elements, and then only learn fine-grained semantic
correspondences. Both approaches do not consider the gran-
ularity of the compositions, where different query granularity
corresponds to different video segments. Therefore, a good
compositional representation should be sensitive to different
video and query granularity. We propose a method to learn a
coarse-to-fine compositional representation by decomposing
the original query sentence into different granular levels,
and then learning the correct correspondences between the
video and recombined queries through a contrastive rank-
ing constraint. Additionally, we run temporal boundary
prediction in a coarse-to-fine manner for precise ground-
ing boundary detection. Experiments are performed on two
datasets, Charades-CG and ActivityNet-CG, showing the
superior compositional generalizability of our approach.
| 1. Introduction
Over the last years, robust results have been shown for
the detection of predefined, simpler action classes in video
[11, 38, 41, 48]. On the other hand, the detection of dense ac-
tion, e.g. action contents described by a rich description, still
poses a significant challenge due to the large diversity of pos-
sible language descriptions and semantic associations [16].
* Work done while Lijin Yang was an Intern at Woven by Toyota.
Recent work in this area treats complex actions as monolithic
events via end-to-end predictions [27, 44]. They produce
a single label to densely describe a long video sequence,
and they are difficult to scale up to more varied patterns.
Fundamentally, most complex actions consist of a series of
simpler events, and thus a dense action can be treated as a
composition of known event primitives [13, 25, 26]. Such a
compositional representation can allow for a higher degree
of model generalization. By learning from a finite number of
action composites, and recombining their constituent event
primitives in novel ways for unseen action descriptions, the
representation can expand to large numbers of novel sce-
narios that have not been observed in the original action
space. In this case, the action description is not treated as a
single label but as a language modality that allows to learn
finer-grained video and language correspondence.
The task of temporal video grounding concentrates on the
described setting. Given a video and an action-related query
sentence, the model has to output the start and end times
of the specific moment that semantically corresponds to the
given query sentence.
On top of temporal grounding, Compositional Temporal
Grounding is a new task for testing whether the model can
generalize to sentences that contain novel compositions of
seen words. Several recent methods [12, 47, 50] encode both
sentence and video segments into unstructured global repre-
sentations and then devise specific cross-modal interaction
modules to fuse them for final prediction. Unfortunately,
such global representations fail to explicitly model video
structure and language compositions, and fail for cases where
higher granularity is required. Similar to the above approach
of correspondence learning via global information, compos-
ite representations learn from paired monolithic video and
language respectively, through large-scale self-supervised
pre-training [32]. In either case, utilizing datasets of limited action size to achieve generalization towards novel actions remains challenging due to the combinatorial complexity of novel compositions.
Differently to the global representation approaches,
VISA [17] introduces a compositional method toward the
Figure 1. Our proposed method of decomposition and reconstruction of masked words with a coarse-to-fine scheme.
temporal grounding task. They parse video and language
into several primitive elements, and then learn fine-grained
semantic correspondences. Specifically, the composite rep-
resentation is extracted from a graph that is structured with
primitive elements, and these elements serve as a common
representation for vision-language correspondence learning.
However, both the global and compositional approaches dis-
regard the granularity of the action composition during the
learning phase, and can have issues when generalizing to
more varied action spaces.
Based on these considerations, we argue that a good com-
posite representation should be sensitive to different levels
of granularity of both the action and the query language.
We provide an overview of our idea in Fig. 1. Concretely,
we propose to learn a composite representation in a coarse-
to-fine manner, where we first decompose the whole query
sentence into several simple phrases as subsentences, and
then learn correspondences not only globally across query
and video, but also between the subsentences and their re-
lated primitive sequences from the video. Since there is no
ground truth to relate subsentences and their corresponding
sequences, we propose to learn correspondence by decom-
posing and reconstructing them under weak supervision. For
each subsentence as an anchor sample, we generate a tem-
poral proposal and learn the positive representation through
the mining of negative samples.
To structure the overall weak supervision, we mask the
words in a given anchor query, and use the embedding of the
positive and negative query with the related pseudo temporal
segments from the video respectively to do the reconstruc-
tion for the masked words. The negative sample is another
subsentence but includes the words from the other query
action description, to make the model sensitive to word-level variation in the composition. Since the negative subsentence contains novel words compared to the positive anchor subsentence, we can rank the reconstruction quality according to the given query subsentence as a natural prior constraint during training.
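A minimal sketch of the ranking constraint just described, under assumed inputs (the reconstruction scores of the masked words obtained with the anchor query, the positive subsentence, and the two negatives); the margin value is illustrative:

```python
import torch
import torch.nn.functional as F

def ranking_loss(scores, margin=0.2):
    """scores: reconstruction log-likelihoods of the masked words, ordered as
    [anchor query, positive subsentence, negative AB, negative B], shape (4,).
    Enforces anchor > positive > negative_AB > negative_B with pairwise margins."""
    loss = 0.0
    for better, worse in [(0, 1), (1, 2), (2, 3)]:
        loss = loss + F.relu(margin - (scores[better] - scores[worse]))
    return loss

# toy usage with hypothetical reconstruction scores
scores = torch.tensor([1.3, 1.1, 0.6, 0.2])
print(ranking_loss(scores))  # zero once the desired ordering holds with the margin
```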
Additionally, the ground truth that links the temporal
boundary to the global sentence provides supervision for the
coarse compositional representation. The generated subsen-
tences from the same global query are recombined into a
new sentence that should be as informative as the original
global query, and then this recombined sentence is used with
the original query sentence to estimate the temporal bound-
ary via supervised learning. The whole process is trained
end-to-end.
Our contributions are summarized below:
(i). We argue that a good compositional representation
should be sensitive to action granularity of video and query
language, and propose a coarse-to-fine decomposition ap-
proach to this end.
(ii). To learn a compositional representation without
composite-level labels, we decompose events from global
queries and learn primitive-level correspondences between
video and language with a weakly-supervised contrastive
ranking in form of word-masked reconstruction.
(iii). The decomposed events are recombined into a novel query that, together with the original query, is used for supervised temporal boundary estimation, jointly learned in an end-to-end way with a coarse-to-fine structure.
(iv). Experiments on Charades-CG and ActivityNet-
CG [17] demonstrate that our method significantly outperforms existing methods on compositional temporal grounding tasks.
|
Yu_PanelNet_Understanding_360_Indoor_Environment_via_Panel_Representation_CVPR_2023 | Abstract
Indoor 360 panoramas have two essential properties. (1)
The panoramas are continuous and seamless in the hori-
zontal direction. (2) Gravity plays an important role in in-
door environment design. By leveraging these properties, we
present PanelNet, a framework that understands indoor envi-
ronments using a novel panel representation of 360 images.
We represent an equirectangular projection (ERP) as consec-
utive vertical panels with corresponding 3D panel geometry.
To reduce the negative impact of panoramic distortion, we in-
corporate a panel geometry embedding network that encodes
both the local and global geometric features of a panel. To
capture the geometric context in room design, we introduce
Local2Global Transformer, which aggregates local informa-
tion within a panel and panel-wise global context. It greatly
improves the model performance with low training overhead.
Our method outperforms existing methods on indoor 360
depth estimation and shows competitive results against state-
of-the-art approaches on the task of indoor layout estimation
and semantic segmentation.
| 1. Introduction
Understanding indoor environments is an important topic
in computer vision as it is crucial for multiple practical appli-
cations such as room reconstruction, robot navigation, and
virtual reality applications. Early methods focus on mod-
eling indoor scenes using perspective images [9, 10, 19].
With the development of CNNs and omnidirectional pho-
tography, many works turn to understanding indoor scenes us-
ing panorama images. Compared to the perspective images,
panorama images have a larger field-of-view (FoV) [43] and
provide the geometric context of the indoor environment in
a continuous way [24].
There are several 360 input formats used in indoor scene
understanding. One of the most commonly-used formats is
the equirectangular projection (ERP). Modeling the holistic
scene from an ERP is challenging. The ERP distortion in-
creases when pixels are close to the zenith or nadir of the
image, which may decrease the power of the convolutional
Figure 1. An overview of the proposed system. We present Panel-
Net, a network that learns the indoor environment using a novel
panel representation of ERP. We formulate the panel representation
as consecutive ERP panels with corresponding global and local
geometry. By slightly modifying the network structure, PanelNet is
capable of tackling major 360 indoor understanding tasks such as
depth estimation, semantic segmentation and layout prediction.
structures designed for distortion-free perspective images.
To eliminate the negative effects of ERP distortion, recent
works [8, 21, 28] focus on decomposing the whole panorama
into perspective patches, i.e., tangent images. However, par-
titioning a panorama into discontinuous patches breaks the
local continuity of gravity-aligned scenes and objects, which
limits the performance of these works. To reduce the im-
pact of distortion while preserving the local continuity, we
present PanelNet, a novel network to understand the indoor
scene from equirectangular projection.
We design our PanelNet based on two essential properties
of equirectangular projection. (1) The ERP is continuous
and seamless in the horizontal direction. (2) Gravity plays an
important role in indoor environment design, which makes
the gravity-aligned features crucial for indoor 360 under-
standing [24, 31]. Following these two properties, we tackle
the challenges above through a novel panel representation of
ERP. We represent an ERP as consecutive panels with cor-
responding global and local 3D geometry, which preserves
the gravity-aligned features within a panel and maintains
the global continuity of the indoor structure across panels.
Inspired by Omnifusion [21], we design a geometry embed-
ding network for panel representations that encodes both
local and global features of panels to reduce the negative
effects of ERP distortion without adding further explicit dis-
tortion fixing modules. We further introduce Local2Global
Transformer as a feature processor. Considering the nature
of panel representation, we design this Transformer with
Window Blocks for local information aggregation and Panel
Blocks for panel-wise context capturing. The main contribu-
tions of our work are:
•We represent the ERP as consecutive vertical panels
with corresponding 3D geometry. We introduce Panel-
Net, a novel indoor panorama understanding pipeline
using the panel representation. Following the essential
geometric properties of the indoor equirectangular pro-
jection, our framework outperforms existing methods
on the task of indoor 360 depth estimation and shows
competitive results on other indoor scene understand-
ing tasks such as semantic segmentation and layout
prediction.
•We propose a panel geometry embedding network that
encodes both local and global geometric features of pan-
els and reduces the negative impact of ERP distortion
implicitly while preserving the geometric continuity.
•We design Local2Global Transformer as a feature pro-
cessor, which greatly enhances the continuity of geo-
metric features and improves the model performance by
successfully aggregating the local information within a
panel and capturing panel-wise context accurately.
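As a rough illustration of the panel representation summarized above (not the authors' implementation), an ERP image can be cut into consecutive vertical panels together with the per-pixel viewing directions that serve as a simple form of panel geometry:

```python
import numpy as np

def erp_to_panels(erp, num_panels=16):
    """erp: (H, W, 3) equirectangular image. Returns a list of (panel, rays) pairs,
    where rays holds the unit viewing direction of every pixel in the panel."""
    H, W, _ = erp.shape
    lon = (np.arange(W) / W - 0.5) * 2 * np.pi          # longitude per column
    lat = (0.5 - (np.arange(H) + 0.5) / H) * np.pi      # latitude per row
    lon_g, lat_g = np.meshgrid(lon, lat)                # (H, W)
    rays = np.stack([np.cos(lat_g) * np.sin(lon_g),     # x
                     np.sin(lat_g),                     # y (gravity axis)
                     np.cos(lat_g) * np.cos(lon_g)], -1)
    width = W // num_panels
    return [(erp[:, i * width:(i + 1) * width], rays[:, i * width:(i + 1) * width])
            for i in range(num_panels)]
```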
|
Yu_Fusing_Pre-Trained_Language_Models_With_Multimodal_Prompts_Through_Reinforcement_Learning_CVPR_2023 | Abstract
Language models are capable of commonsense reason-
ing: while domain-specific models can learn from ex-
plicit knowledge (e.g., commonsense graphs [6], ethical
norms [25]), larger models like GPT-3 [7] mani-
fest broad commonsense reasoning capacity. Can their
knowledge be extended to multimodal inputs such as im-
ages and audio without paired domain data? In this
work, we propose
ESPER (Extending Sensory PErception
with Reinforcement learning) which enables text-only pre-
trained models to address multimodal tasks such as visual
commonsense reasoning. Our key novelty is to use rein-
forcement learning to align multimodal inputs to language
model generations without direct supervision: for example,
our reward optimization relies only on cosine similarity de-
rived from CLIP [52] and requires no additional paired
(image, text) data. Experiments demonstrate that ESPER
outperforms baselines and prior work on a variety of mul-
timodal text generation tasks ranging from captioning to
commonsense reasoning; these include a new benchmark
we collect and release, the ESP dataset, which tasks mod-
els with generating the text of several different domains for
each image. Our code and data are publicly released at
https://github.com/JiwanChung/esper .
| 1. Introduction
Collecting multimodal training data for new domains can
be a Herculean task. Not only is it costly to assemble mul-
timodal data, but also curated datasets cannot completely
cover a broad range of skills, knowledge, and form ( e.g.
free text, triplets, graphs, etc.). Ideally, we want to endow
*denotes equal contribution
Figure 1. The intuition of ESPER, Extending Sensory PErception
with Reinforcement learning. To better align knowledge in CLIP
and pretrained language models (PLM), we use CLIP as a reward
for the pairs of images and self-generated text.
multimodal models with diverse reasoning capacity ( e.g.
ethics [25], commonsense [57], etc.) without undertaking
a separate multimodal annotation effort each time.
In this work, we propose Extending Sensory PErception
with Reinforcement learning (ESPER), a new framework that
enables a pre-trained language model to accept multimodal
inputs like images and audio. ESPER extends diverse skills
embodied by the pre-trained language model to similarly
diverse multimodal capabilities, all without requiring ad-
ditional visually paired data. In a zero-shot fashion, our
model generates text conditioned on an image: using this
interface, we show that ESPER is capable of a diverse range
of skills, including visual commonsense [50], news [39], di-
alogues [60], blog-style posts [28], and stories [22].
ESPER combines insights from two previously disjoint
lines of work: multimodal prompt tuning and reinforce-
ment learning. Like prior multimodal prompt tuning work,
ESPER starts from a base language-only model (e.g., GPT-
2 [53], COMET [6]), keeps most of its parameters frozen,
and trains a small number of encoder parameters to map
visual features into the embedding space of the language
model [40, 45, 68]. Unlike prior works, however, ESPER
does not train these parameters using maximum likelihood
estimation over a dataset of aligned (image, caption) pairs.
Instead, it uses a reinforcement learning objective. During
training, the model is first queried for completions condi-
tioned on visual features. Then, the lightweight vision-to-
text encoder is updated using proximal policy optimization
(PPO) [59] to maximize a similarity score computed with a
secondary pre-trained image-caption model, CLIP [52]. As
a result, the frozen language model can interpret the mul-
timodal inputs in the same context as the text embedding
space without additional human-annotated paired data.
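To make the reward computation concrete, below is a hedged sketch (our own, not the released ESPER code) of a CLIP-based cosine-similarity reward using the Hugging Face CLIP interface; the checkpoint name is only illustrative and the PPO update of the lightweight vision-to-text encoder is not shown:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_reward(image, generated_texts):
    """Cosine similarity between an image and each sampled continuation,
    used as the unsupervised reward signal for the policy update."""
    img = processor(images=image, return_tensors="pt")
    txt = processor(text=generated_texts, return_tensors="pt",
                    padding=True, truncation=True)
    img_emb = model.get_image_features(**img)
    txt_emb = model.get_text_features(**txt)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return (txt_emb @ img_emb.T).squeeze(-1)   # one reward per sampled text
```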
Reinforcement learning has two advantages over maxi-
mum likelihood objectives. |
Yu_Rotation-Invariant_Transformer_for_Point_Cloud_Matching_CVPR_2023 | Abstract
The intrinsic rotation invariance lies at the core of
matching point clouds with handcrafted descriptors. However, it is widely despised by recent deep matchers that obtain the rotation invariance extrinsically via data augmentation. As the finite number of augmented rotations can never span the continuous SO(3) space, these meth-
ods usually show instability when facing rotations that are
rarely seen. To this end, we introduce RoITr, a Rotation-
Invariant Transformer to cope with the pose variations
in the point cloud matching task. We contribute both on the local and global levels. Starting from the local level,
we introduce an attention mechanism embedded with Point
Pair Feature (PPF)-based coordinates to describe the pose-invariant geometry, upon which a novel attention-based encoder-decoder architecture is constructed. We further propose a global transformer with rotation-invariant cross-frame spatial awareness learned by the self-attention mechanism, which significantly improves the feature distinctiveness and makes the model robust with respect to the low overlap. Experiments are conducted on both the rigid and non-rigid public benchmarks, where RoITr outperforms all the state-of-the-art models by a considerable margin in the low-overlapping scenarios. Especially when the rotations
are enlarged on the challenging 3DLoMatch benchmark, RoITr surpasses the existing methods by at least 13 and 5 percentage points in terms of Inlier Ratio and Registration Recall, respectively. Code is publicly available1.
| 1. Introduction
The correspondence estimation between a pair of
partially-overlapping point clouds is a long-standing task that lies at the core of many computer vision applications, such as tracking [17, 18], reconstruction [22, 23, 39],
pose estimation [ 19,27,50] and 3D representation learn-
ing [ 15,16,46], etc. In a typical solution, geometry is first
encoded into descriptors, and correspondences are then established between two frames by matching the most similar
descriptors. As the two frames are observed from different
views, depicting the same geometry under different trans-
1https://github.com/haoyu94/RoITr
Figure 1. Feature Matching Recall (FMR) on 3DLoMatch [19]
and Rotated 3DLoMatch. Distance to the diagonal represents the
robustness against rotations. Among all the state-of-the-art approaches, RoITr not only ranks first on both benchmarks but also shows the best robustness against the enlarged rotations.
formations identically, i.e., the pose-invariance, becomes
the key to success in the point cloud matching task.
Since the side effects caused by a global translation can
always be easily eliminated, e.g., by aligning the barycen-
ter with the origin, the attention naturally shifts to coping
with the rotations. In the past, handcrafted local descrip-
tors [ 11,30,31,41] were designed to be rotation-invariant
so that the same geometry observed from different views can be correctly matched. With the emergence of deep neural models for 3D point analysis, e.g., multilayer percep-
trons (MLPs)-based like PointNet [ 25,26], convolutions-
based like KPConv [ 6,40], and the attention-based like
PointTransformer [ 33,53], recent approaches [ 1,7,9,10,13,
19,20,27,32,48–51] propose to learn descriptors from raw
points as an alternative to handcrafted features that are less robust to occlusion and noise. The majority of deep point matchers [7, 10, 19, 20, 27, 34, 48, 50–52] is sensitive to rotations. Consequently, their invariance to rotations must be obtained extrinsically via augmented training to ensure that the same geometry under different poses can be depicted similarly. However, as the training cases can never span the continuous SO(3) space, they always suffer from instability when facing rotations that are rarely seen during training. This can be observed by a significant performance drop under enlarged rotations at inference time. (See Fig. 1.)
There are other works [ 1,9,13,32,44] that only lever-
age deep neural networks to encode the pure geometry with
the intrinsically-designed rotation invariance. However, the intrinsic rotation invariance comes at the cost of losing global context. For example, a human's left and right halves are almost identically described, which naturally degrades the distinctiveness of features. Most recently, RIGA [49]
is proposed to enhance the distinctiveness of the rotation-invariant descriptors by incorporating a global context, e.g.,
the left and right halves of a human become distinguishable by knowing there is a chair on the left while a table on the right. However, it lacks a highly-representative geometry encoder since it relies on PointNet [25], which accounts for an ineffective local geometry description. Moreover, as depicting the cross-frame spatial relationships is non-trivial, previous works [19, 27, 34, 50] merely leverage the contextual features in the cross-frame context aggregation, which neglects the positional information. Although RIGA pro-
poses to learn a rotation-invariant position representation by
leveraging an additional PointNet, this simple design struggles to model the complex cross-frame positional relationships and leads to less distinctive descriptors.
In this paper, we present Rotation-Invariant Transformer
(RoITr) to tackle the problem of point cloud matching under arbitrary pose variations. By using Point Pair Features (PPFs) as the local coordinates, we propose an attention mechanism to learn the pure geometry regardless
of the varying poses. Upon it, attention-based layers are
further proposed to compose the encoder-decoder architecture for highly-discriminative and rotation-invariant geometry encoding. We demonstrate its superiority over PointTransformer [53], a state-of-the-art attention-based backbone network, in terms of both efficiency and efficacy in Fig. 8 and Tab. 4(a), respectively. On the global level, the cross-frame position awareness is introduced in a rotation-invariant fashion to facilitate feature distinctiveness. We illustrate its significance over the state-of-the-art design [27]
in Tab. 4(d). Our main contributions are summarized as:
• An attention mechanism designed to disentangle the
geometry and poses, which enables the pose-agnostic geometry description.
• An attention-based encoder-decoder architecture that
learns highly-representative local geometry in a rotation-invariant fashion.
• A global transformer with rotation-invariant cross-
frame position awareness that significantly enhances the feature distinctiveness.
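For reference, the Point Pair Features used as local coordinates follow the standard four-component definition (the angles between the two normals and the connecting vector, plus its length), which is invariant to rigid motions; a small NumPy sketch, independent of the paper's implementation:

```python
import numpy as np

def angle(a, b):
    """Unsigned angle between two 3D vectors."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2):
    """PPF(p1, p2) = (angle(n1, d), angle(n2, d), angle(n1, n2), ||d||),
    where d = p2 - p1; the feature is unchanged under rigid transformations."""
    d = p2 - p1
    return np.array([angle(n1, d), angle(n2, d), angle(n1, n2), np.linalg.norm(d)])
```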
|
Zhang_LVQAC_Lattice_Vector_Quantization_Coupled_With_Spatially_Adaptive_Companding_for_CVPR_2023 | Abstract
Recently, numerous end-to-end optimized image com-
pression neural networks have been developed and proved
themselves as leaders in rate-distortion performance. The
main strength of these learnt compression methods is in
powerful nonlinear analysis and synthesis transforms that
can be facilitated by deep neural networks. However, out of
operational expediency, most of these end-to-end methods
adopt uniform scalar quantizers rather than vector quantiz-
ers, which are information-theoretically optimal. In this pa-
per, we present a novel Lattice Vector Quantization scheme
coupled with a spatially Adaptive Companding (LVQAC)
mapping. LVQ can better exploit the inter-feature depen-
dencies than scalar uniform quantization while being com-
putationally almost as simple as the latter. Moreover, to
improve the adaptability of LVQ to source statistics, we
couple a spatially adaptive companding (AC) mapping with
LVQ. The resulting LVQAC design can be easily embedded
into any end-to-end optimized image compression system.
Extensive experiments demonstrate that for any end-to-end
CNN image compression models, replacing uniform quan-
tizer by LVQAC achieves better rate-distortion performance
without significantly increasing the model complexity.
| 1. Introduction
In the past five years, the research on end-to-end CNN
image compression has made steady progress and led to
the birth of a new class of image compression methods
[1, 3, 4, 8, 21, 25, 26, 34–36, 40–44]. The CNN compres-
sion can now match and even exceed the rate-distortion
performance of the previous best image compression meth-
ods [6,33,37,45], which operate in the traditional paradigm
of linear transform, quantization and entropy coding.
The advantages of the CNN approach of data compres-
sion come from the nonlinearity of its analysis and syn-
thesis transforms of the autoencoder architecture, the end-
to-end joint optimization of the nonlinear transforms, uni-form quantization of the latent space and conditional en-
tropy coding (context-based arithmetic coding) of quantized
features.
Apparently, using uniform scalar quantizer in the above
CNN image compression framework is motivated by oper-
ational expediency more than other considerations. Only at
very high bit rates can uniform quantization approach the
rate-distortion optimality [15]. It is very difficult to di-
rectly adopt and optimize a vector quantizer (VQ) in the
end-to-end CNN architecture design for data compression,
because VQ is a discrete decision process and it is not com-
patible with variational backpropagation that is necessary to
the end-to-end CNN training. In [1], Agustsson et al. tried
to circumvent the difficulty by a so-called soft-to-hard vec-
tor quantization scheme. Their technique is a soft (continu-
ous) relaxation of discrete computations of VQ and entropy
so that their effects can be approximated in the end-to-end
training. However, in [1] the quantization centers are opti-
mized along with the other modules, which make the whole
system quite cumbersome and more difficult to train. In
this paper, we propose a novel Lattice Vector Quantiza-
tion scheme coupled with a spatially Adaptive Companding
(LVQAC) mapping. LVQ can better exploit the inter-feature
dependencies than scalar uniform quantization while be-
ing computationally almost as simple as the latter. Even if
the features to be compressed are statistically independent,
LVQ is still a more efficient coding strategy than scalar uni-
form quantization. This is because the former offers a more
efficient covering of high-dimensional space than the lat-
ter, as proven by the theory of sphere packings, lattices and
groups [11]. Moreover, to improve the adaptability of LVQ
to source statistics, we couple a spatially adaptive compand-
ing mapping with LVQ. The resulting LVQAC design is
computationally as simple and as amenable to the end-to-
end training of the CNN compression model as in the orig-
inally proposed framework of [3].
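As a concrete but deliberately simple example of lattice vector quantization, the classic nearest-point rule for the D_n lattice (Conway and Sloane) is sketched below; the actual lattice and the learned companding used in LVQAC are not reproduced here, so this only illustrates the general idea:

```python
import numpy as np

def nearest_point_dn(x):
    """Nearest point of the D_n lattice (integer vectors with even coordinate sum)
    to a real vector x, following Conway & Sloane's decoding rule."""
    f = np.rint(x)                       # round every coordinate
    if int(f.sum()) % 2 == 0:
        return f
    err = x - f                          # otherwise re-round the worst coordinate
    k = int(np.argmax(np.abs(err)))
    f[k] += 1.0 if err[k] > 0 else -1.0
    return f

print(nearest_point_dn(np.array([0.6, 0.6])))   # -> [1. 1.], coordinate sum is even
```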
Consequently, for any end-to-end CNN image com-
pression models, replacing uniform quantizer by LVQAC
achieves better rate-distortion performance without sig-
nificantly increasing the model complexity; the simpler
|
Zhang_MetaPortrait_Identity-Preserving_Talking_Head_Generation_With_Fast_Personalized_Adaptation_CVPR_2023 | Abstract
In this work, we propose an ID-preserving talking head
generation framework, which advances previous methods intwo aspects. First, as opposed to interpolating from sparse
flow, we claim that dense landmarks are crucial to achiev-
ing accurate geometry-aware flow fields. Second, inspiredby face-swapping methods, we adaptively fuse the source
identity during synthesis, so that the network better pre-serves the key characteristics of the image portrait. Al-
though the proposed model surpasses prior generation fi-delity on established benchmarks, personalized fine-tuning
is still needed to further make the talking head generation
qualified for real usage. However , this process is rather
computationally demanding that is unaffordable to stan-dard users. To alleviate this, we propose a fast adaptationmodel using a meta-learning approach. The learned modelcan be adapted to a high-quality personalized model as fast
as 30 seconds. Last but not least, a spatial-temporal en-
hancement module is proposed to improve the fine details
while ensuring temporal coherency. Extensive experiments
*Equal contribution, interns at Microsoft Research.
†Joint corresponding authors.prove the significant superiority of our approach over the
state of the arts in both one-shot and personalized settings.
| 1. Introduction
Talking head generation [ 1,6,7,24,27,31,33,39,41,47]
has found extensive applications in face-to-face live chat,virtual reality and virtual avatars in games and videos. Inthis paper, we aim to synthesize a realistic talking head with
a single source image (one-shot) that provides the appear-ance of a given person while being animatable according
to the motion of the driving person. Recently, considerable
progress has been made with neural rendering techniques,
bypassing the sophisticated 3D human modeling processand expensive driving sensors. While these works attain
increasing fidelity and higher rendering resolution, identity
preserving remains a challenging issue since the human vi-sion system is particularly sensitive to any nuanced devia-tion from the person’s facial geometry.
Prior arts mainly focus on learning a geometry-aware
warping field, either by interpolating from sparse 2D/3Dlandmarks or leveraging 3D face prior, e.g., 3D morphable
face model (3DMM) [ 2,3]. However, fine-grained facial
geometry may not be well described by a set of sparse land-marks or inaccurate face reconstruction. Indeed, the warp-ing field, trained in a self-supervised manner rather than us-ing accurate flow ground truth, can only model coarse ge-
ometry deformation, lacking the expressivity that captures
the subtle semantic characteristics of the portrait.
In this paper, we propose to better preserve the portrait
identity in two ways. First, we claim that dense facial land-
marks are sufficient for an accurate warping field predictionwithout the need for local affine transformation. Specifi-cally, we adopt a landmark prediction model [ 43] trained
on synthetic data [ 42], yielding 669 head landmarks that of-
fer significantly richer information on facial geometry. Inaddition, we build upon the face-swapping approach [ 23]
and propose to enhance the perceptual identity by atten-tionally fusing the identity feature of the source portraitwhile retaining the pose and expression of the intermedi-
ate warping. Equipped with these two improvements, our
one-shot model demonstrates a significant advantage over
prior works in terms of both image quality and perceptualidentity preservation when animating in-the-wild portraits.
While our one-shot talking head model has achieved
state-of-the-art quality, it is still infeasible to guarantee sat-isfactory synthesis results because such a one-shot setting isinherently ill-posed—one may never hallucinate the person-
specific facial shape and occluded content from a singlephoto. Hence, ultimately we encounter the uncanny val-
ley [32] that a user becomes uncomfortable as the synthe-
sis results approach to realism. To circumvent this, one
workaround is to finetune the model using several minutesof a personal video. Such personalized training has been
widely adopted in industry to ensure product-level quality,yet this process is computationally expensive, which greatly
limits its use scenarios. Thus, speeding up this personal-
ized training , a task previously under-explored, is of great
significance to the application of talking head synthesis.
We propose to achieve fast personalization with meta-
learning. The key idea is to find an initialization modelthat can be easily adapted to a given identity with limited
training iterations. To this end, we resort to a meta-learning
approach [ 9,26] that finds success in quickly learning dis-
criminative tasks, yet is rarely explored in generative tasks.
Specifically, we optimize the model for specific personal
data with a few iterations. In this way, we get a slightlyfine-tuned personal model towards which we move the ini-tialization model weight a little bit. Such meta-learned ini-
tialization allows us to train a personal model within 30 sec-onds, which is 3 times faster than a vanilla pretrained modelwhile requiring less amount of personal data.
Moreover, we propose a novel temporal super-resolution
network to enhance the resolution of the generated talkinghead video. To do this, we leverage the generative priorto boost the high-frequency details for portraits and mean-
while take into account adjacent frames that are helpful toreduce temporal flickering. Finally, we reach temporallycoherent video results of 512×512 resolution with com-
pelling facial details. In summary, this work innovates inthe following aspects:
• We propose a carefully designed framework to signif-
icantly improve the identity-preserving capability whenanimating a one-shot in-the-wild portrait.
• To the best of our knowledge, we are the first to explore
meta-learning to accelerate personalized training, thusobtaining ultra-high-quality results at affordable cost.
• Our novel video super-resolution model effectively en-
hances details without introducing temporal flickering.
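A rough sketch of the meta-learned initialization described above, written as a Reptile-style outer update (our reading of moving the initialization weights toward the briefly fine-tuned personal model); the loss function and data pipeline below are assumed placeholders:

```python
import copy
import torch

def meta_update(init_model, person_loaders, inner_steps=5, inner_lr=1e-4, meta_lr=0.1):
    """One meta-iteration: briefly personalize a copy of the initialization for each
    identity, then move the initialization a small step toward every adapted copy."""
    for loader in person_loaders:                      # one data loader per identity
        fast = copy.deepcopy(init_model)
        opt = torch.optim.Adam(fast.parameters(), lr=inner_lr)
        for _, batch in zip(range(inner_steps), loader):
            loss = fast.training_loss(batch)           # assumed task-specific loss (placeholder)
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():                          # interpolate toward the adapted weights
            for p_init, p_fast in zip(init_model.parameters(), fast.parameters()):
                p_init.add_(meta_lr * (p_fast - p_init))
    return init_model
```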
|
Yeh_Meta-Personalizing_Vision-Language_Models_To_Find_Named_Instances_in_Video_CVPR_2023 | Abstract
Large-scale vision-language models (VLM) have shown
impressive results for language-guided search applications.
While these models allow category-level queries, they cur-
rently struggle with personalized searches for moments in
a video where a specific object instance such as “My dog
Biscuit” appears. We present the following three contribu-
tions to address this problem. First, we describe a method
to meta-personalize a pre-trained VLM, i.e., learning how to
learn to personalize a VLM at test time to search in video.
Our method extends the VLM’s token vocabulary by learn-
ing novel word embeddings specific to each instance. To
capture only instance-specific features, we represent each
instance embedding as a combination of shared and learned
global category features. Second, we propose to learn such
personalization without explicit human supervision. Our
approach automatically identifies moments of named visual
instances in video using transcripts and vision-language
similarity in the VLM’s embedding space. Finally, we intro-
duce This-Is-My, a personal video instance retrieval bench-
mark. We evaluate our approach on This-Is-My and Deep-
Fashion2 and show that we obtain a 15% relative improve-
ment over the state of the art on the latter dataset.
| 1. Introduction
The recent introduction of large-scale pre-trained vision-
language models (VLMs) has enabled many new vision
tasks, including zero-shot classification and retrieval [14,
18, 22], image/video generation [12, 24, 25, 27, 28, 32], or
language-guided question answering [1, 19, 42]. It is now
possible to search not only for specific object categories
(e.g., dogs) but also for more specific descriptions of both
†Equal advising.
∗Work done during CHY’s summer internship at Adobe Research.
2Czech Institute of Informatics, Robotics and Cybernetics at the
Czech Technical University in Prague.
Figure 1. Meta-Personalized Vision-Language Model (VLM)
to Retrieve Named Instances in Video. Given a video where a
user-specific instance, e.g., “My dog Biscuit” is mentioned, our
method automatically learns a representation for the user-specific
instance in the VLM’s text input space. The personalized VLM
can then be used to retrieve the learned instance in other contexts
through natural language queries, e.g.,<my dog Biscuit >
grabbing a pink frisbee . This result is enabled by meta-
personalizing the VLM on a large-scale dataset of narrated videos
by pre-learning shared global category tokens (in this example for
the category of ’dogs’), which are then easily personalized to user-
specific instances from only a few user-given training examples.
the object and scene attributes ( e.g., “A small white dog
playing at the dog park”). However, we often do not want
to search for just any example of a generic category but in-
stead to find a specific instance. For example, a user might
want to search their personal video library for all the scenes
that show their dog “Biscuit grabbing a pink frisbee”, as
illustrated in Figure 1. Since VLMs do not have a represen-
tation of “Biscuit,” such queries are beyond the capabilities
of off-the-shelf VLMs.
Recent work [5] proposed a method to extend the lan-
guage encoder’s vocabulary with a newly learned token that
represents a specific personal instance to address this is-
sue. While this approach enables language-guided search
for personal instances by placing the learned tokens in the
query prompt, their solution assumes a collection of man-
ually annotated images showing the individual instance in
various contexts for successful token learning. For this ap-
proach to work in practice, a user must manually annotate
all their important personal instances in various contexts,
such that the instance representation does not capture nui-
sance features, e.g., the background. We thus identify two
key challenges: 1) collecting personal instance examples
without explicit human labeling and 2) learning a gener-
alizable object-centric representation of personal instances
from very few examples.
The contributions of this work are three-fold. As our first
contribution, we propose a method to automatically identify
important personal instances in videos for personalizing a
vision-language model without explicit human annotations.
Indeed, people often record and refer to personal items or
relationships in videos found online. Our approach is thus
to identify mentions of personal instances in a video au-
tomatically and leverage these moments to build a set of
personal instances for training. To this end, we extract the
transcripts of videos using speech-to-text models and find
candidate moments by looking for occurrences of “this is
my *” or similar possessive adjective patterns. The symbol
* in this example could represent a single word or sequence
of words describing the instance ( e.g., *= “dog Biscuit”).
We then use vision-language similarity to filter non-visual
examples and to find additional occurrences in the video
for training. For example, we found more than six thousand
named instances in 50K videos randomly sampled from the
Merlot Reserve dataset [44]. We call the resulting collection
of named instances in videos the This-Is-My dataset.
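To illustrate the transcript-mining step, here is a hedged sketch of the possessive-pattern matching; the exact pattern list, time alignment, and CLIP-based filtering thresholds used by the authors are not specified here:

```python
import re

# "this is my/our <instance phrase>", capturing a short phrase after the pattern
PATTERN = re.compile(r"\bthis is (?:my|our) ([a-z][a-z' -]{2,40})", re.IGNORECASE)

def find_named_instances(transcript_segments):
    """transcript_segments: list of (start_time, end_time, text) from speech-to-text.
    Returns (timestamp, instance phrase) candidates for later vision-language filtering."""
    hits = []
    for start, end, text in transcript_segments:
        for match in PATTERN.finditer(text):
            hits.append(((start, end), match.group(1).strip()))
    return hits

print(find_named_instances([(12.0, 15.5, "And this is my dog Biscuit playing outside")]))
```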
As our second contribution, we propose a novel model
and training procedure to learn text tokens representing
the named instances in video from possibly very few and
noisy training examples. Our method represents each in-
stance with learned tokens and models each token as a linear
combination of a set of pre-learned category-specific fea-
tures shared across different instances. This set of shared
category-specific features (similar to object attributes) im-
proves the generalization of our method by preventing the
instance representations from capturing nuisance features
(e.g., the scene background). Furthermore, we show how
to pre-train and adapt the shared category features using a
large set of automatically collected This-Is-My examples,
further improving our model’s few-shot personalization per-
formance at test-time. We call this pre-training of shared
category features meta-personalization . In contrast to priorwork [5], our method does not require training additional
neural network models and requires only the optimization
of a contrastive learning objective.
As our final contribution, we demonstrate and evalu-
ate our model on an existing fashion item retrieval bench-
mark, DeepFashion2, and our new challenging This-Is-My
video instance retrieval dataset4 depicting specific object
instances across different videos and contexts. Our ex-
periments demonstrate that our method outperforms sev-
eral baselines and prior approaches on these challenging
language-guided instance retrieval tasks.
|
Ye_Affordance_Diffusion_Synthesizing_Hand-Object_Interactions_CVPR_2023 | Abstract
Recent successes in image synthesis are powered by
large-scale diffusion models. However, most methods are
currently limited to either text- or image-conditioned gen-
eration for synthesizing an entire image, texture transfer or
inserting objects into a user-specified region. In contrast,
in this work we focus on synthesizing complex interactions
(i.e., an articulated hand) with a given object. Given an
RGB image of an object, we aim to hallucinate plausible
images of a human hand interacting with it. We propose a
two-step generative approach: a LayoutNet that samples an
articulation-agnostic hand-object-interaction layout, and a
ContentNet that synthesizes images of a hand grasping the
object given the predicted layout. Both are built on top of
a large-scale pretrained diffusion model to make use of its
latent representation. Compared to baselines, the proposed
method is shown to generalize better to novel objects and
perform surprisingly well on out-of-distribution in-the-wild
scenes of portable-sized objects. The resulting system al-
lows us to predict descriptive affordance information, such
as hand articulation and approaching orientation.
*Yufei was an intern at NVIDIA during the project.
| 1. Introduction
Consider the bottles, bowls and cups shown in the left
column of Figure 1. How might a human hand interact
with such objects? Not only is it easy to imagine, from
a single image, the types of interactions that might occur
(e.g., ‘grab/hold’), and the interaction locations that might
happen ( e.g. ‘handle/body’), but it is also quite natural to
hallucinate—in vivid detail— several ways in which a hand
might contact and use the objects. This ability to predict
and hallucinate hand-object-interactions (HOI) is critical to
functional understanding of a scene, as well as to visual im-
itation and manipulation.
Can current computer vision algorithms do the same? On
the one hand, there has been a lot of progress in image gen-
eration, such as synthesizing realistic high-resolution im-
ages spanning a wide range of object categories [43, 73]
from human faces to ImageNet classes. Newer diffusion
models such as Dall-E 2 [65] and Stable Diffusion [66] can
generate remarkably novel images in diverse styles. In fact,
highly-realistic HOI images can be synthesized from simple
text inputs such as “a hand holding a cup” [65, 66].
On the other hand, however, such models fail when con-
ditioned on an image of a particular object instance. Given
an image of an object, it remains an extremely challeng-
ing problem to generate realistic human object interaction.
Solving this problem requires (at least implicitly) an under-
standing of physical constraints such as collision and force
stability, as well as modeling the semantics and functional-
ity of objects — the underlying affordances [19]. For ex-
ample, the hand should prefer to grab the kettle handle but
avoid grabbing the knife blade. Furthermore, in order to
produce visually plausible results, it also requires modeling
occlusions between hands and objects, their scale, lighting,
texture, etc.
In this work, we propose a method for interaction syn-
thesis that addresses these issues using diffusion models. In
contrast to a generic image-conditioned diffusion model, we
build upon the classic idea of disentangling where to inter-
act (layout ) from how to interact ( content ) [25,30]. Our key
insight is that diverse interactions largely arise from hand-
object layout, whereas hand articulations are driven by lo-
cal object geometry. For example, a mug can be grasped
by either its handle or body, but once the grasping location
is determined, the placement of the fingers depends on the
object’s local surface and the articulation will exhibit only
subtle differences. We operationalize this idea by proposing
a two-step stochastic procedure: 1) a LayoutNet that gener-
ates 2D spatial arrangements of hands and objects, and 2)
aContentNet that is conditioned on the query object im-
age and the sampled HOI layout to synthesize the images
of hand-object interactions. These two modules are both
implemented as image-conditioned diffusion models.
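As a high-level sketch of how the two-step procedure could be wired together at inference time, the snippet below assumes hypothetical LayoutNet and ContentNet sampling interfaces; it is an illustration, not the released API:

```python
import torch

@torch.no_grad()
def synthesize_hoi(object_image, layout_net, content_net, num_samples=4):
    """Sample several articulation-agnostic layouts for the object image, then
    condition the content model on each layout to render a hand-object image."""
    results = []
    for _ in range(num_samples):
        layout = layout_net.sample(object_image)           # e.g. hand location / size / orientation
        hoi_image = content_net.sample(object_image, layout)
        results.append((layout, hoi_image))
    return results
```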
We evaluate our method on HOI4D and EPIC-
KITCHEN [11, 48]. Our method outperforms generic im-
age generation baselines, and the extracted hand poses from
our HOI synthesis are favored in user studies against base-
lines that are trained to directly predict hand poses. We
also demonstrate surprisingly robust generalization ability
across datasets, and we show that our model can quickly
adapt to new hand-object-interactions with only a few ex-
amples. Lastly, we show that our proposed method enables
editing and guided generation from partially specified lay-
out parameters. This allows us to reuse heatmap prediction
from prior work [13, 56] and to generate consistent hand
sizes for different objects in one scene.
Our main contributions are summarized below: 1) we
propose a two-step method to synthesize hand-object in-
teractions from an object image, which allows affordance
information to be extracted from it; 2) we use inpainting tech-
niques to supervise the model with paired real-world HOI
and object-only images and propose a novel data augmenta-
tion method to alleviate overfitting to artifacts; and 3) we show
that our approach generates realistic HOI images along with
plausible 3D poses and generalizes surprisingly well on out-
of-distribution scenes. 4) We also highlight several applica-
tions that would benefit from such a method. |
Yin_1_VS_100_Parameter-Efficient_Low_Rank_Adapter_for_Dense_Predictions_CVPR_2023 | Abstract
Fine-tuning large-scale pre-trained vision models to
downstream tasks is a standard technique for achieving
state-of-the-art performance on computer vision bench-
marks. However, fine-tuning the whole model with millions
of parameters is inefficient as it requires storing a same-
sized new model copy for each task. In this work, we pro-
pose LoRand, a method for fine-tuning large-scale vision
models with a better trade-off between task performance
and the number of trainable parameters. LoRand gener-
ates tiny adapter structures with low-rank synthesis while
keeping the original backbone parameters fixed, resulting
in high parameter sharing. To demonstrate LoRand’s ef-
fectiveness, we implement extensive experiments on object
detection, semantic segmentation, and instance segmenta-
tion tasks. By only training a small percentage (1% to 3%)
of the pre-trained backbone parameters, LoRand achieves
comparable performance to standard fine-tuning on COCO
and ADE20K and outperforms fine-tuning on the low-resource
PASCAL VOC dataset.
| 1. Introduction
With the rapid development of computer vision, pa-
rameters in deep models are surging. Giant models need
to be trained with massive resources to achieve superior
performance [3, 17, 47, 58], which is often unavailable to
many academics and institutions. The “Pretrain & Finetun-
ing” paradigm is widely used to alleviate this dilemma.
Teams with sufficient computation resources utilise enor-
mous datasets [2, 9, 40, 50] to train superior backbones
[4, 32, 40, 48] and optimise the models with ideal perfor-
mances. Models pretrained in this way usually have a su-
Corresponding author.
†Equal contribution.
Figure 1. Comparisons of trainable backbone parameters between
our methods (red) and fine-tuning (black). In COCO, we achieve
advanced performances and outperform most existing backbones
with only 0.9–2.5M new backbone parameters (Cascade-RCNN
is employed as the detector). The fine-tuning paradigm produces
massive redundant backbone parameters, whereas our approach
saves over 97% of hardware resources with competitive perfor-
mances. The sizes of the circles intuitively compare the number of
trainable parameters.
perior understanding of homogeneous data. After that, re-
searchers with limited computational resources can trans-
fer the understanding capabilities of the pre-trained models
to downstream tasks with promising performances by fine-
tuning [1, 26, 46, 53].
However, the fine-tuned model will produce a new set
of parameters as large as the pre-trained model. New pa-
rameters are independent of the pre-trained models and un-
shareable, which is very hardware-intensive for cloud ser-
vice providers [23, 49]. Figure 1 compares the parameter
quantities of some remarkable backbones and their perfor-
mances on the COCO [28] dataset. Recent advances in
natural language processing (NLP) [30, 38] show that large
pre-trained models trained with rich data have strong gener-
Figure 2. Architecture of the adapter module and its integration
with the Transformer. Left: We add two LoRand structures to
each SwinBlock located behind the W/SW-MSA and MLP struc-
tures respectively. Right: LoRand contains two Multi-branch low-
rank projections and nonlinearity. We include skip-connection to
LoRand to enhance its robustness.
alisability, which means most parameters in the pre-trained
models can be shared with the new tasks [22,36,37,44,59].
Moreover, recent literature demonstrates that the feature un-
derstanding of pre-trained models could be reduced when
they are fine-tuned in low-resource situations [12, 36]. To
tackle these issues, NLP researchers propose two new train-
ing paradigms based on pre-trained models: Adapter Tun-
ing [22] and Prompt Tuning [30], both of which tune the
new models by fixing the pre-trained parameters and adding
a few trainable structures (less than 10% of the backbone).
These paradigms create a new buzz in NLP and achieve im-
pressive performances which can be competitive with fine-
tuning [12, 22, 30, 36–38, 44, 59]. Advances in NLP also
shed new light on computer vision. Jia et al. [24] propose
Visual Prompt Tuning (VPT) and demonstrate that VPT
can outperform fine-tuning on image classification tasks by
training a small number of trainable parameters. Never-
theless, VPT shows weakness on more challenging dense
predictions like semantic segmentation compared with fine-
tuning [24].
To find a parameter-efficient paradigm with promising
performance in computer vision, we explore the poten-
tial of Adapter Tuning for visual dense predictions. We
employ the advanced Swin Transformer [32] trained with
ImageNet-22K [9] as the pre-trained model. After that, we
add bottleneck adapter structures [22] behind each Swin-
Block and freeze the original backbone parameters when
training, but this approach cannot achieve comparable per-
formance to fine-tuning as mentioned in [24]. In the exper-iments, we find that the models perform better with sparser
adapter structures. To improve the performance of Adapter
Tuning, we propose Low-Rank Adapter (LoRand) to re-
duce the adapter parameters, as shown in Figure 2. LoRand
sparsely parameterizes the matrices in adapters by low-rank
synthesis. Specifically, the projection matrix of the fully-
connected layer (FC) in LoRand is a product of multiple
low-rank matrices, which reduces FC parameters by more
than 80%. We implement extensive experiments on ob-
ject detection (PASCAL VOC [14]), semantic segmentation
(ADE20K [62]), and instance segmentation (MS COCO
[28]) to verify the capability of LoRand. Experimental re-
sults show that LoRand-Tuning is comparable to fine-tuning
on multiple tasks with only 1.8% to 2.8% new backbone
parameters, which suggests that the pre-trained backbone
parameters can be fully shared. More interestingly, our
method completely outperforms fine-tuning on the PAS-
CAL VOC dataset, illustrating that LoRand-Tuning can re-
duce the impairment of fine-tuning on pre-trained models in
low-resource configurations. Our method demonstrates that
the LoRand-Tuning paradigm can substantially save storage
resources and achieve competitive performances on most
dense prediction tasks. In summary, our contributions are
three-fold:
We demonstrate that visual pre-trained models are
highly generalisable and shareable. With our training
methods, new tasks require only a few trainable pa-
rameters to achieve performances comparable to fine-
tuning, which can save massive hardware resources.
We propose the LoRand structure for sparser adapters
based on low-rank synthesis. We demonstrate that the
backbone parameters in fine-tuning are highly redun-
dant, which can be replaced by 1.8% to 2.8% addi-
tional parameters in LoRand.
Extensive experiments on object detection, semantic
segmentation, and instance segmentation show that
LoRand-Tuning can achieve remarkable performances
and reduce massive new parameters in challenging
dense prediction tasks.
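A hedged PyTorch sketch of a LoRand-style adapter consistent with the description above: a bottleneck adapter whose projection weights are synthesized as products of low-rank factors, wrapped with a skip connection. The multi-branch fusion of the paper is simplified to a single branch here.

```python
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    """Linear layer whose weight matrix is the product of two low-rank factors."""
    def __init__(self, d_in, d_out, rank=4):
        super().__init__()
        self.u = nn.Parameter(torch.randn(d_out, rank) * 0.02)
        self.v = nn.Parameter(torch.randn(rank, d_in) * 0.02)
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x):
        return x @ (self.u @ self.v).t() + self.bias

class LoRandAdapter(nn.Module):
    """Bottleneck adapter inserted after a (frozen) SwinBlock sub-layer."""
    def __init__(self, dim, bottleneck=64, rank=4):
        super().__init__()
        self.down = LowRankLinear(dim, bottleneck, rank)
        self.act = nn.GELU()
        self.up = LowRankLinear(bottleneck, dim, rank)

    def forward(self, x):                  # skip connection keeps the frozen features
        return x + self.up(self.act(self.down(x)))

# toy usage: only the adapter parameters would be trainable
adapter = LoRandAdapter(dim=96)
print(sum(p.numel() for p in adapter.parameters()))
```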
|
Yu_ACR_Attention_Collaboration-Based_Regressor_for_Arbitrary_Two-Hand_Reconstruction_CVPR_2023 | Abstract
Reconstructing two hands from monocular RGB images
is challenging due to frequent occlusion and mutual con-
fusion. Existing methods mainly learn an entangled rep-
resentation to encode two interacting hands, which are in-
credibly fragile to impaired interaction, such as truncated
hands, separate hands, or external occlusion. This pa-
per presents ACR (Attention Collaboration-based Regres-
sor), which makes the first attempt to reconstruct hands in
arbitrary scenarios. To achieve this, ACR explicitly miti-
gates interdependencies between hands and between parts
by leveraging center and part-based attention for feature
extraction. However, reducing interdependence helps re-
lease the input constraint while weakening the mutual rea-
soning about reconstructing the interacting hands. Thus,
based on center attention, ACR also learns cross-hand
prior that handle the interacting hands better. We eval-
uate our method on various types of hand reconstruction
datasets. Our method significantly outperforms the best
interacting-hand approaches on the InterHand2.6M dataset
while yielding comparable performance with the state-of-
the-art single-hand methods on the FreiHand dataset. More
qualitative results on in-the-wild and hand-object interac-
tion datasets and web images/videos further demonstrate
the effectiveness of our approach for arbitrary hand recon-
struction. Our code is available at this link1.
| 1. Introduction
3D hand pose and shape reconstruction based on a single
RGB camera plays an essential role in various emerging ap-
plications, such as augmented and virtual reality (AR/VR),
human-computer interaction, 3D character animation for
movies and games, etc. However, this task is highly chal-
lenging due to limited labeled data, occlusion, depth am-
biguity, etc. Earlier attempts [1, 2, 36, 39] reduce the
problem difficulty by focusing on single-hand reconstruction.
These methods started from exploring weakly-supervised
*Corresponding author.
1https://github.com/ZhengdiYu/Arbitrary-Hands-3D-Reconstruction
Figure 1. Given a monocular RGB image, our method makes
the first attempt to reconstruct hands under arbitrary scenarios by
representation disentanglement and interaction mutual reasoning
while the previous state-of-the-art method IntagHand [ 15] failed.
learning paradigms [ 2] to designing more advanced network
models [ 31]. Although single-hand approaches can be ex-
tended to reconstruct two hands, they generally ignore the
inter-occlusion and confusion issues, thus failing to handle
two interacting hands.
To this end, recent research has shifted toward recon-
structing two interacting hands. Wang et al. [ 33] extract
multi-source complementary information to reconstruct two
interacting hands simultaneously. Rong et al. [ 27] and
Zhang et al. [ 35] first obtain initial prediction and stack in-
termediate results together to refine two-hand reconstruc-
tion. The latest work [ 15] gathers pyramid features and
two-hand features as input for a GCN-based network that
regresses two interacting hands unitedly. These methods
share the same principle: treating two hands as an integral
and learning a unified feature to ultimately refine or regress
the interacting-hand model. The strategy delivers the ad-
vantage of explicitly capturing the hands’ correlation but in-
evitably introduces the input constraint of two hands. This
limitation also makes the methods particularly vulnerable
and easily fail to handle inputs containing imperfect hand
interactions, including truncation or external occlusions.
This paper takes the first step toward reconstructing two
hands in arbitrary scenarios. Our first key insight is lever-
aging center and part attention to mitigate interdependen-
cies between hands and between parts to release the input
constraint and eliminate the prediction sensitivity to a small
occluded or truncated part. To this end, we propose At-
tention Collaboration-based Regressor (ACR). Specifically,
it comprises two essential ingredients: Attention Encoder
(AE) and Attention Collaboration-based Feature Aggrega-
tor (ACFA). The former learns hand-center and per-part at-
tention maps with a cross-hand prior map, allowing the net-
work to know the visibility of both hands and each part be-
fore the hand regression. The latter exploits the hand-center
and per-part attention to extract global and local features as
a collaborative representation for regressing each hand inde-
pendently and subsequently enhance the interaction model-
ing by cross-hand prior reasoning with an interaction field.
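As a rough illustration of how center and part attention can decouple the two hands, the sketch below pools a shared feature map into per-hand global and per-part features by attention-weighted averaging; the tensor shapes and the pooling rule are assumptions, not ACR's exact formulation.

```python
import torch

def aggregate_hand_features(feat, center_att, part_att, eps: float = 1e-6):
    """
    feat:       (B, C, H, W) backbone feature map
    center_att: (B, 2, H, W) per-hand center attention (left, right)
    part_att:   (B, 2, J, H, W) per-hand, per-part attention
    Returns per-hand global and per-part features via attention-weighted pooling,
    so each hand is regressed from its own aggregated representation.
    """
    w = center_att / (center_att.sum(dim=(-2, -1), keepdim=True) + eps)
    global_feat = torch.einsum("bchw,bnhw->bnc", feat, w)        # (B, 2, C)
    wp = part_att / (part_att.sum(dim=(-2, -1), keepdim=True) + eps)
    part_feat = torch.einsum("bchw,bnjhw->bnjc", feat, wp)       # (B, 2, J, C)
    return global_feat, part_feat
```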
In contrast to existing methods, our method offers additional
advantages, such as being free of a hand detector. Furthermore,
experiments show that ACR achieves lower error on the In-
terHand2.6M dataset than the state-of-the-art interacting-
hand methods, demonstrating its effectiveness in handling
interaction challenges. Finally, results on in-the-wild im-
ages or video demos indicate that our approach is promis-
ing for real-world application with the powerful aggregated
representation for arbitrary hands reconstruction.
Our key contributions are summarized as: (1) we take
the first step toward reconstructing two hands in arbi-
trary scenarios. (2) We propose to leverage both cen-
ter and part based representation to mitigate interde-
pendencies between hands and between parts and re-
lease the input constraint. (3) In terms of modeling for in-
teracting hands, we propose a cross-hand prior reason-
ing module with an interaction field to adjust the de-
pendency strength. (4) Our method outperforms exist-
ing state-of-the-art approaches significantly on the In-
terHand2.6M benchmark. Furthermore, ACR is the most
practical method for various in-the-wild application scenes
among all the prior arts of hand reconstruction.
|
Zhang_Document_Image_Shadow_Removal_Guided_by_Color-Aware_Background_CVPR_2023 | Abstract
Existing works on document image shadow removal
mostly depend on learning and leveraging a constant back-
ground (the color of the paper) from the image. However,
the constant background is less representative and frequent-
ly ignores other background colors, such as the printed col-
ors, resulting in distorted results. In this paper, we present
a color-aware background extraction network (CBENet) for
extracting a spatially varying background image that ac-
curately depicts the background colors of the documen-
t. Furthermore, we propose a background-guided docu-
ment image shadow removal network (BGShadowNet) us-
ing the predicted spatially varying background as auxil-
iary information, which consists of two stages. At Stage
I, a background-constrained decoder is designed to pro-
mote a coarse result. Then, the coarse result is refined
with a background-based attention module (BAModule) to
maintain a consistent appearance and a detail improvement
module (DEModule) to enhance the texture details at Stage
II. Experiments on two benchmark datasets qualitatively
and quantitatively validate the superiority of the proposed
approach over state-of-the-arts.
| 1. Introduction
Documents, such as textbooks, newspapers, leaflets, and
receipts, are available daily, often saved as electronic doc-
uments for digital document archives or online message
transfer. Since the wide use and convenience of mobile
phones, people currently prefer to use mobile phones for
digital document copying. However, the captured docu-
ment images become highly susceptible to shadows when
the light sources are blocked. The low brightness in shadow
regions reduces the quality and readability of the document
image, resulting in illegible content and unpleasant user
experience [2, 10, 12, 19, 28, 29].
Corresponding author: Chunxia Xiao ([email protected]).
Figure 1. Document image shadow removal. Panels: (a) Shadow image; (b) Our background; (c) Our result; (d) Result of [2]; (e) Background of [24]; (f) Result of [24]. With the assumption of constant background, results of [2] and [24] may cause color distortion or shadow remnant. By using a spatially varying background, our method can produce a more desirable result.
Thus, shadow removal for
document images is a required image processing task in var-
ious vision applications [3, 4, 33, 37, 46, 48].
Although natural image shadow removal has made sub-
stantial progress [16, 20, 22, 40, 44, 45], these approach-
es generally perform poorly on document pictures due
to their drastically different features from natural images.
A natural image, for example, emphasizes background con-
tent (the shadow-free image) without a shadow layer [6, 7, 13,
27, 41], whereas a document image emphasizes text content
[2, 19, 30]. Without considering these particular properties of
the document image, traditional approaches designed for natural
images, as well as learning-based methods, generally yield
incorrect results when applied to document images (see
Figure 9(c-f)), due to insufficient attention to the content struc-
tures.
Several approaches on document image shadow removal
are currently available, which dig into the specific charac-
teristics of the document image [3, 21, 30, 49]. However,
these approaches may cause color distortion or shadow rem-
nant for image with complex backgrounds, as illustrated in
Figure 1(d). Recently, Lin et al. [24] estimate a constant
background for the image and propose BEDSR-Net for doc-
ument image shadow removal. The constant background is
the color of the paper (see Figure 1(e)), which ignores some
other background colors, such as the printed colors. The
constant background may provide inaccurate information
for the shadow removal task, resulting in unsatisfactory re-
sults (see Figure 1(f)). To address this problem, we propose
a color-aware background extraction network (CBENet) to
extract a spatially varying background, which can preserve
various background colors of the original image (see Figure
1(b)). The spatially varying background can provide more
useful color information, which contributes to image shad-
ow removal, as shown in Figure 1(c).
With the background image, we propose a background-
guided shadow removal network (BGShadowNet) for doc-
ument image that exploits the background image as aux-
iliary information. Figure 2 presents the framework of
the proposed BGShadowNet, which removes shadows in
a two-stage process. First, we introduce a background-
constrained decoder to combine background features with
image features, which can help to promote the realistic
coarse shadow-removal result. Then, we refine the coarse
result with a background-based attention module (BAMod-
ule) and a detail enhancement module (DEModule). In par-
ticular, BAModule is designed to eliminate the illumination
and color inconsistency in the image by using the attention
mechanism. Inspired by image histogram equalization, DE-
Module aims to enhance the detail features at low-level
scales.
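The exact BAModule design is not reproduced here, but the following hedged sketch shows one generic way a predicted background image could modulate image features through channel attention; the channel width, the encoder, and the fusion rule are illustrative assumptions and assume the background and feature map share spatial size.

```python
import torch
import torch.nn as nn

class BackgroundAttention(nn.Module):
    """Reweight image features with channel attention derived from a background image."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.bg_encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),           # squeeze background features
            nn.Conv2d(channels, channels, 1),  # per-channel gate
            nn.Sigmoid(),
        )

    def forward(self, img_feat: torch.Tensor, background: torch.Tensor) -> torch.Tensor:
        bg_feat = self.bg_encoder(background)  # encode the spatially varying background
        attn = self.gate(bg_feat)              # channel weights in [0, 1]
        return img_feat * attn + bg_feat       # modulate image features and fuse background cues
```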
Due to the lack of large-scale real document image
datasets, we construct a new dataset comprised of real doc-
ument images to facilitate the performance of document
image shadow removal. Experiments on extensive docu-
ment images and evaluations on two datasets verify that our
BGShadowNet outperforms existing approaches.
In summary, our main contributions are three-fold:
We present a color-aware background extraction net-
work (CBENet) for estimating a spatially varying
background image that guides the shadow removal of
document image.
We propose a framework named BGShadowNet for
document image shadow removal, which takes full ad-
vantage of the background image and is able to robust-
ly produce high-quality shadow removal results.
We design a background-based attention module
(BAModule) to maintain a consistent appearance and
a detail enhancement module (DEModule) to enhance
texture details. |
Yoon_Cross-Guided_Optimization_of_Radiance_Fields_With_Multi-View_Image_Super-Resolution_for_CVPR_2023 | Abstract
Novel View Synthesis (NVS) aims at synthesizing an im-
age from an arbitrary viewpoint using multi-view images
and camera poses. Among the methods for NVS, Neural
Radiance Fields (NeRF) is capable of NVS for an arbitrary
resolution as it learns a continuous volumetric representa-
tion. However, radiance fields rely heavily on the spectral
characteristics of coordinate-based networks. Thus, there
is a limit to improving the performance of high-resolution
novel view synthesis (HRNVS). To solve this problem, we
propose a novel framework using cross-guided optimiza-
tion of the single-image super-resolution (SISR) and radi-
ance fields. We perform multi-view image super-resolution
(MVSR) on train-view images during the radiance fields op-
timization process. It derives the updated SR result by fus-
ing the feature map obtained from SISR and voxel-based un-
certainty fields generated by integrated errors of train-view
images. By repeating the updates during radiance fields op-
timization, train-view images for radiance fields optimiza-
tion have multi-view consistency and high-frequency de-
tails simultaneously, ultimately improving the performance
of HRNVS. Experiments of HRNVS and MVSR on various
benchmark datasets show that the proposed method signifi-
cantly surpasses existing methods.
| 1. Introduction
Novel View Synthesis (NVS) is an approach to synthe-
sizing an image from an arbitrary viewpoint using multi-
view images and camera poses. This is an essential task in
computer vision and graphics, and it can be actively used
in street-view navigation, AR/VR, and robotics. Recently,
Neural Radiance Fields [28] (NeRF) significantly improved
the performance of NVS by learning multi-layer perceptron
(MLP) from 5d coordinate input. Since then, many studies
have been conducted to shorten the long learning time of
NeRF [4, 10, 29, 36, 42, 43, 48], increase the performance of
NVS using depth priors [5,8,32,41], and enable NVS from
few-shot views [13, 16, 31, 49].
Figure 1. Cross-guided optimization between single image super-
resolution and radiance fields. They complement weaknesses of
one another with their respective strengths by using the SR update
module, rendered train-view RGBs, and uncertainty maps.
Continuous scene representations such as NeRF [28] can
be rendered at arbitrary resolution. Thus, there are many
studies to improve the performance of multi-scale scene
representation. Mip-NeRF [2] proposes scale-dependent
positional encoding, which allows a network to be trained on
multiple scales. In addition, BACON [22] proposes a net-
work capable of band-limited multi-scale decomposition by
giving a constraint to the bandwidth of network outputs.
Both papers showed significant down-scaling performance
on volume rendering. On the other hand, NeRF-SR [39]
improves the performance of high-resolution novel view
synthesis (HRNVS) by learning in an unsupervised manner
through super-sampling in the radiance fields optimization
process.
Radiance fields have the ability to find scene geometry
and optimize 5D functions simultaneously. Still, radiance
fields have a low ability to perform super-resolution, and
even if they synthesize high-resolution (HR) images, they
only depend on the characteristic of continuous scene repre-
sentation. On the other hand, single-image super-resolution
(SISR) generally specializes in learning the inverse func-
tion of image degradation. Therefore, SISR could be benefi-
cial to HRNVS by super-resolving train-view images; how-
ever, SISR is an ill-posed problem for which multiple so-
lutions exist, and multi-view consistency cannot be main-
tained when multi-view images are processed separately.
To solve this problem, we propose a novel framework us-
ing cross-guided optimization between radiance fields and
SISR. As shown in Fig. 1, our framework aims to ensure
that radiance fields are guided by superior high-frequency
details from SISR, and conversely, SISR is guided by multi-
view consistency from radiance fields. We perform train-
view synthesis during the radiance fields optimization pro-
cess. Then, we generate voxel-based uncertainty fields to
obtain uncertainty maps to find reliable regions in rendered
train-view RGB images. The rendered train-view outputs
and feature maps from the SISR network make it possible to
do multi-view image super-resolution (MVSR) through the
SR update module (SUM). Then, we continue optimizing
the radiance fields using the updated SR outputs. Repeating
the update process makes train-view images for radiance
fields optimization have multi-view consistency and high-
frequency details simultaneously, ultimately improving the
performance of HRNVS.
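The actual SR update module fuses SISR feature maps inside the network, but the simplified, pixel-space sketch below illustrates the underlying idea of uncertainty-weighted guidance between the rendered train view and the super-resolved view; the exponential confidence mapping and the temperature are assumptions.

```python
import torch

def update_sr_view(sr_rgb: torch.Tensor,
                   rendered_rgb: torch.Tensor,
                   uncertainty: torch.Tensor,
                   temperature: float = 0.1) -> torch.Tensor:
    """
    sr_rgb:       (H, W, 3) current super-resolved train view from the SISR branch
    rendered_rgb: (H, W, 3) the same view rendered from the radiance field
    uncertainty:  (H, W, 1) per-pixel uncertainty of the rendered view (low = reliable)
    Blend toward the rendered view where the radiance field is confident, so the
    updated SR image inherits multi-view consistency while keeping SISR details.
    """
    confidence = torch.exp(-uncertainty / temperature)  # map uncertainty to (0, 1]
    return confidence * rendered_rgb + (1.0 - confidence) * sr_rgb
```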
Our method significantly surpasses existing methods in
HRNVS and MVSR performance on various benchmark
datasets. It also shows consistent performance improvements
when combined with various SISR models and radiance
field models.
In summary, our contributions are as follows:
• We propose a novel framework for performing cross-
guided radiance fields optimization using the SISR model
for HRNVS.
• We propose voxel-based uncertainty fields to find reliable
regions of synthesized images.
• We propose an SR update module (SUM) using voxel-
based uncertainty fields and train-view synthesis outputs
for MVSR.
• Experiments on various benchmark datasets show that the
proposed method significantly surpasses existing meth-
ods in terms of performance for HRNVS and MVSR.
|
Yang_Efficient_On-Device_Training_via_Gradient_Filtering_CVPR_2023 | Abstract
Despite its importance for federated learning, continu-
ous learning and many other applications, on-device train-
ing remains an open problem for EdgeAI. The problem
stems from the large number of operations (e.g., floating
point multiplications and additions) and memory consump-
tion required during training by the back-propagation al-
gorithm. Consequently, in this paper, we propose a new
gradient filtering approach which enables on-device CNN
model training. More precisely, our approach creates a
special structure with fewer unique elements in the gradi-
ent map, thus significantly reducing the computational com-
plexity and memory consumption of back propagation dur-
ing training. Extensive experiments on image classification
and semantic segmentation with multiple CNN models (e.g.,
MobileNet, DeepLabV3, UPerNet) and devices (e.g., Rasp-
berry Pi and Jetson Nano) demonstrate the effectiveness
and wide applicability of our approach. For example, com-
pared to SOTA, we achieve up to 19× speedup and 77.1%
memory savings on ImageNet classification with only 0.1%
accuracy loss. Finally, our method is easy to implement and
deploy; over 20× speedup and 90% energy savings have
been observed compared to highly optimized baselines in
MKLDNN and CUDNN on NVIDIA Jetson Nano. Conse-
quently, our approach opens up a new direction of research
with a huge potential for on-device training.1
| 1. Introduction
Existing approaches for on-device training are neither
efficient nor practical enough to satisfy the resource con-
straints of edge devices (Figure 1). This is because these
methods do not properly address a fundamental problem in
on-device training, namely the computational and memory
complexity of the back-propagation (BP) algorithm. More
precisely, although the architecture modification [6] and
layer freezing [18, 20] can help skipping the BP for some
layers, for other layers, the complexity remains high.
1Code: https://github.com/SLDGroup/GradientFilter-CVPR23
Gradient quantization [4, 7] can reduce the cost of arithmetic
operations but cannot reduce the number of operations ( e.g.,
multiplications); thus, the speedup in training remains lim-
ited. Moreover, gradient quantization is not supported
by existing deep-learning frameworks (e.g., CUDNN [9],
MKLDNN [1], PyTorch [25] and Tensorflow [2]). To en-
able on-device training, there are two important questions
that must be addressed:
•How can we reduce the computational complexity of
back propagation through the convolution layers?
•How can we reduce the data required by the gradient
computation during back propagation?
In this paper, we propose gradient filtering , a new research
direction, to address both questions. By addressing the first
question, we reduce the computational complexity of train-
ing; by addressing the second question, we reduce the mem-
ory consumption.
In general, the gradient propagation through a convolu-
tion layer involves multiplying the gradient of the output
variable with a Jacobian matrix constructed with data from
either the input feature map or the convolution kernel. We
aim at simplifying this process with the new gradient filter-
ing approach proposed in Section 3. Intuitively, if the gradi-
ent map w.r.t. the output has the same value for all entries,
then the computation-intensive matrix multiplication can be
greatly simplified, and the data required to construct the Ja-
cobian matrix can be significantly reduced. Thus, our gra-
dient filtering can approximate the gradient w.r.t. the output
by creating a new gradient map with a special ( i.e., spatial)
structure and fewer unique elements. By doing so, the gra-
dient propagation through the convolution layers reduces to
cheaper operations, while the data required (hence memory)
for the forward propagation also lessens. Through this fil-
tering process, we trade off the gradient precision against
the computation complexity during BP. We note that gradi-
ent filtering does not necessarily lead to a worse precision,
i.e., models sometimes perform better with filtered gradi-
ents when compared against models trained with vanilla BP.
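As a rough sketch of this structured-gradient idea, the snippet below approximates a gradient map by one shared value per spatial patch and attaches it to a layer output via a tensor hook. Note that this only mimics the approximation for illustration; it does not by itself deliver the computational savings described above, which require a dedicated backward implementation. The patch size is an assumption.

```python
import torch
import torch.nn.functional as F

def filter_gradient(grad_out: torch.Tensor, patch: int = 4) -> torch.Tensor:
    """Approximate a gradient map by one shared value per patch x patch block."""
    n, c, h, w = grad_out.shape
    pooled = F.avg_pool2d(grad_out, kernel_size=patch, stride=patch, ceil_mode=True)
    # Broadcast each patch average back to the original spatial resolution.
    return F.interpolate(pooled, size=(h, w), mode="nearest")

class GradientFilter(torch.nn.Module):
    """Identity in the forward pass; filters the gradient arriving at its input."""
    def __init__(self, patch: int = 4):
        super().__init__()
        self.patch = patch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if x.requires_grad:
            # The hook receives dL/dx (the gradient w.r.t. the preceding layer's output)
            # and replaces it with its patch-wise approximation.
            x.register_hook(lambda g: filter_gradient(g, self.patch))
        return x
```

Placed after a convolution (e.g., `nn.Sequential(conv, GradientFilter(4))`), the module leaves the forward pass untouched while back-propagating a gradient map with far fewer unique elements.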
In summary, our contributions are as follows:
• We propose gradient filtering , which reduces the com-
putation and memory required for BP by more than
two orders of magnitude compared to the exact gradi-
ent calculation.
• We provide a rigorous error analysis which shows that
the errors introduced by the gradient filtering have only
a limited influence on model accuracy.
• Our experiments with multiple CNN models and com-
puter vision tasks show that we can train a neural net-
work with significantly less computation and memory
costs, with only a marginal accuracy loss compared to
baseline methods. Side-by-side comparisons against
other training acceleration techniques also suggest the
effectiveness of our method.
• Our method is easy to deploy with highly optimized
deep learning frameworks ( e.g., MKLDNN [1] and
CUDNN [9]). Evaluations on resource-constrained
edge (Raspberry Pi and Jetson Nano) and high-
performance devices (CPU/GPU) show that our
method is highly suitable for real life deployment.
The paper is organized as follows. Section 2 reviews rel-
evant work. Section 3 presents our method in detail. Sec-
tion 4 discusses error analysis, computation and memory
consumption. Experimental results are presented in Section
5. Finally, Section 6 summarizes our main contributions.
|
You_UTM_A_Unified_Multiple_Object_Tracking_Model_With_Identity-Aware_Feature_CVPR_2023 | Abstract
Recently, Multiple Object Tracking has achieved great
success, which consists of object detection, feature embed-
ding, and identity association. Existing methods apply the
three-step or two-step paradigm to generate robust trajec-
tories, where identity association is independent of other
components. However, the independent identity associa-
tion results in the identity-aware knowledge contained in
the tracklet not being used to boost the detection and embed-
ding modules. To overcome the limitations of existing meth-
ods, we introduce a novel Unified Tracking Model (UTM)
to bridge those three components for generating a positive
feedback loop with mutual benefits. The key insight of UTM
is the Identity-Aware Feature Enhancement (IAFE), which
is applied to bridge and benefit these three components
by utilizing the identity-aware knowledge to boost detec-
tion and embedding. Formally, IAFE contains the Identity-
Aware Boosting Attention (IABA) and the Identity-Aware
Erasing Attention (IAEA), where IABA enhances the consis-
tent regions between the current frame feature and identity-
aware knowledge, and IAEA suppresses the distracted re-
gions in the current frame feature. With better detections
and embeddings, higher-quality tracklets can also be gener-
ated. Extensive experiments of public and private detections
on three benchmarks demonstrate the robustness of UTM.
| 1. Introduction
Multiple Object Tracking (MOT) aims at locating and
identifying all of the moving objects in the video, which has
broad application prospects in visual surveillance, human-
computer interaction, virtual reality, and unmanned vehi-
cles. With the rapid development of object detection [12,
34, 35, 66], Tracking-By-Detection has become a favorite
paradigm in MOT.
*indicates corresponding author: Changsheng Xu.
Figure 1. Comparison of different MOT frameworks in existing
methods. (a) Separate Detection and Embedding (SDE) [2]. (b)
Joint Detection and Embedding (JDE) [50]. (c) The proposed Uni-
fied Tracking Model (UTM), constructed with the Identity-Aware
Feature Enhancement (IAFE) module.
Recently, a number of Tracking-By-
Detection approaches have been proposed, which can be
divided into two paradigms: Separate Detection and Em-
bedding (SDE) [1, 2, 4, 5, 8, 10, 16, 25, 38, 43, 52, 54], and
Joint Detection and Embedding (JDE) [15, 29, 45, 50, 64].
As illustrated in Figure 1(a), SDE can be divided into
three independent components: object detection, feature
embedding, and identity association1. Candidate bound-
ing boxes (bboxes) are obtained by standard detectors in
each frame first; then the identity embedding of each bbox is
extracted by re-identification algorithms and finally linked
across frames through identity association to generate tra-
1In this paper, identity association includes affinity computation and
data association.
jectories. Therefore, SDE mainly exploits refining detec-
tion [2, 16], enhancing identity embedding [1, 25, 38, 52],
or designing robust association algorithms [5, 8, 54] to im-
prove tracking performance. With the maturity of multi-
task learning, some approaches [45, 50, 64] propose the JDE
framework to reduce the computation cost. Different from
SDE, JDE applies a one-shot tracker to generate object
detections and corresponding visual embeddings simultane-
ously, thus treating MOT as two independent components,
shown in Figure 1(b). Since the identity association in the
above two paradigms is independent of object detection and
feature embedding, the significant clues contained in the
tracklet cannot be applied to enhance the detection and em-
bedding modules.
To address the above problem, a feasible idea is in-
troducing an auxiliary module to associate and propagate
identity-aware knowledge between the identity association
and the other two components. Therefore, we design a
novel Identity-Aware Feature Enhancement (IAFE) module
to achieve information interaction between different com-
ponents, shown in Figure 1(c). In detail, IAFE propagates
the identity-aware knowledge generated by identity asso-
ciation module to enhance the backbone feature of object
detection. Meanwhile, the enhanced backbone feature can
be utilized by the feature embedding module to generate
discriminative embeddings. With more accurate detections
and robust embeddings, the identity association module can
produce higher-quality tracklets. Therefore, object detec-
tion, feature embedding, and identity association are in-
volved with each other, thus forming a positive feedback
loop with mutual benefits.
As shown in Figure 2, the proposed Unified Tracking
Model (UTM) is composed of IAFE, detection branch,
embedding branch, identity association branch, and mem-
ory aggregation module. The detection and embedding
branches are applied to locate and identify each object of
the current frame, and the identity association branch ap-
plies graph matching to associate the candidate bboxes with
history tracklets. To achieve information interaction, IAFE
is proposed to bridge and benefit these three branches,
which utilizes the identity-aware knowledge to boost the
detection and embedding. Specifically, IAFE consists of
two modules: Identity-Aware Boosting Attention (IABA)
and Identity-Aware Erasing Attention (IAEA), where IABA
boosts the consistent regions between the current frame fea-
ture and identity-aware knowledge, and IAEA erases the
distracted regions in the current frame feature. With the
designed IAFE, UTM constructs a positive feedback loop
among all the three branches to improve the performance
of MOT. Furthermore, we introduce a memory aggregation
module to capture identity-aware knowledge through adap-
tively selecting features of history frames, which can allevi-
ate the effect of identity switches. The main contributions of the proposed method can be
summarized as follows:
• We design an Identity-Aware Feature Enhancement
(IAFE) module to bridge and benefit different com-
ponents in MOT. Specifically, it utilizes the identity-
aware knowledge to boost backbone feature, which is
further used to enhance the detection and embedding.
• With the proposed IAFE, we construct a Unified
Tracking Model (UTM) to form a positive feedback
loop with mutual benefits.
• The evaluation of public and private detections on
three benchmarks verifies the effectiveness and gen-
eralization ability of the proposed UTM.
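One plausible reading of the boosting/erasing attention described above is sketched below: regions of the current frame feature that agree with the aggregated identity-aware feature are amplified, and the least consistent locations are suppressed. The cosine-similarity formulation and the erase ratio are assumptions, not the paper's exact IABA/IAEA design.

```python
import torch
import torch.nn.functional as F

def iafe_like_enhance(frame_feat: torch.Tensor,
                      identity_feat: torch.Tensor,
                      erase_ratio: float = 0.1) -> torch.Tensor:
    """
    frame_feat:    (B, C, H, W) backbone feature of the current frame
    identity_feat: (B, C)       aggregated identity-aware feature from past tracklets
    Boost locations consistent with identity knowledge; erase the most distracting ones.
    """
    sim = torch.einsum("bchw,bc->bhw",
                       F.normalize(frame_feat, dim=1),
                       F.normalize(identity_feat, dim=1))   # cosine similarity map
    boost = torch.sigmoid(sim).unsqueeze(1)                 # (B, 1, H, W) boosting weights
    b, h, w = sim.shape
    k = max(1, int(erase_ratio * h * w))
    thresh = sim.flatten(1).kthvalue(k, dim=1).values.view(b, 1, 1)
    keep = (sim > thresh).float().unsqueeze(1)              # zero out the least consistent spots
    return frame_feat * (1.0 + boost) * keep
```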
|
Zhang_LOGO_A_Long-Form_Video_Dataset_for_Group_Action_Quality_Assessment_CVPR_2023 | Abstract
Action quality assessment (AQA) has become an emerg-
ing topic since it can be extensively applied in numerous
scenarios. However, most existing methods and datasets fo-
cus on single-person short-sequence scenes, hindering the
application of AQA in more complex situations. To address
this issue, we construct a new multi-person long-form video
dataset for action quality assessment named LOGO. Dis-
tinguished in scenario complexity, our dataset contains 200
videos from 26 artistic swimming events with 8 athletes in
each sample along with an average duration of 204.2 sec-
onds. As for richness in annotations, LOGO includes for-
mation labels to depict group information of multiple ath-
letes and detailed annotations on action procedures. Fur-
thermore, we propose a simple yet effective method to model
relations among athletes and reason about the potential
*indicates the corresponding author.temporal logic in long-form videos. Specifically, we de-
sign a group-aware attention module, which can be eas-
ily plugged into existing AQA methods, to enrich the clip-
wise representations based on contextual group informa-
tion. To benchmark LOGO, we systematically conduct in-
vestigations on the performance of several popular meth-
ods in AQA and action segmentation. The results reveal
the challenges our dataset brings. Extensive experiments
also show that our approach achieves state-of-the-art on
the LOGO dataset. The dataset and code will be released at
https://github.com/shiyi-zh0408/LOGO .
| 1. Introduction
Action quality assessment (AQA) is applicable to many
real-world contexts where people evaluate how well a spe-
cific action is performed such as sports events [13, 18, 29–
Table 1. Comparisons of LOGO and existing datasets of action quality assessment (upper part of the table) and group activity recognition
(lower part of the table). Score indicates the score annotations; Action denotes action types and temporal boundaries; Act.Label indicates
the action types of both individuals and groups. Bbox indicates bounding boxes for actors. Formation represents formation annotations.
Temp. indicates temporal boundary, Spat. indicates spatial localization.
Dataset Duration Avg.Dur. Anno.Type Samples Events Form.Anno. Year
MTL Dive [33] 15m,54s 6.0s Score 159 1 ✗ 2014
UNLV Dive [31] 23m,26s 3.8s Score 370 1 ✗ 2017
AQA-7-Dive [29] 37m,31s 4.1s Score 549 6 ✗ 2019
MTL-AQA [30] 96m,29s 4.1s Action,Score 1412 16 ✗ 2019
Rhythmic Gymnastics [53] 26h,23m,20s 1m,35s Score 1000 4 ✗ 2020
FSD-10 [26] - 3-30s Action,Score 1484 - ✗ 2020
FineDiving [50] 3h,30m,0s 4.2s Action,Score 3000 30 ✗ 2022
Collective Activity [7] - - Act.Label 44 - ✗ 2009
NCAA [35] 16h,9m,52s 4s Bbox,Act.Label 11436 - ✗ 2016
Volleyball [16] 2h,12m,1s 1.64s Bbox,Act.Label 4830 - ✗ 2016
FineGym [38] 161h,1m,45s 45.7s Act.Label,Temp. 12685 10 ✗ 2020
Multisports [25] 18h,34m,40s 20.9s Act.Label,Temp.,Spat. 3200 247 ✗ 2021
LOGO (Ours) 11h,20m,41s 3m,24s Action,Formation,Score 200 26 ✓(15764)
33, 46], healthcare [28, 39, 55–58], art performances, mil-
itary parades, and others. Due to the extensive applica-
tion of AQA, many efforts have been made over the past
few years. Although some existing works have achieved
promising performances in several simple scenarios, the ap-
plication of AQA in many situations is still difficult to im-
plement. In the data-driven era, the richness of the dataset
largely determines whether the model can be applied to a
wide range of scenarios or not. Inspired by this, we re-
viewed the existing datasets in AQA and concluded that
they are not rich enough for the following two reasons:
Simplicity of Scenarios. We argue that the complex-
ity of the AQA application scenes is reflected in two as-
pects: the number of people and the duration of the videos.
In snowboarding, for example, there is only one performer,
while there are multiple actors in a military parade. In div-
ing, the duration is only about 5 seconds, while in artis-
tic swimming and dance performances, the duration of the
action reaches several minutes. However, most existing
datasets contain a single performer in each sample and
many of them collect videos of 3-8s [29–31, 33, 50], which
makes it difficult for existing methods to model complex
scenes with more actors and longer duration. In such cases,
just focusing on how well actions are performed by each
individual may be insufficient. Relations among actors
should be built, and the potential temporal logic in long-
term videos should be modeled.
Coarse-grained Annotations. Though there have been
some long-form video datasets in AQA [49, 53] which pro-
vide more complex scenes, they typically contain the score
as the only annotation. Such coarse-grained annotationmakes it difficult for models to learn deeper information, es-
pecially in more complex situations. Simply judging action
quality via regressing a score for a long-term video could
be confusing since we cannot figure out how the model de-
termines whether an action is well-performed or not.
To address these issues, we propose a multi-person long-
form video dataset, LOGO (short for LOng-form GrOup).
With 8 actors in each sample, LOGO contains videos with
an average duration of 204.2s, much longer than most ex-
isting datasets in AQA, making the scenes more complex.
Besides, as shown in Figure 1, LOGO contains fine-grained
annotations including frame-wise action type labels and
temporal boundaries of actions for each video. We also
devise formation labels to depict relations among actors.
Specifically, we use a convex polygon to represent the for-
mation actors perform, which reflects their position infor-
mation and group information. In general, LOGO is distin-
guished by its scenario complexity, while it also provides
richer annotations compared to most existing datasets in
AQA [29–31, 33, 49, 50, 53].
Furthermore, we build a plug-and-play module, GOAT
(short for GrOup-aware ATtention), to bridge the gap be-
tween single-person short-sequence scenarios and multi-
person long-sequence scenarios. Specifically, in the spa-
tial domain, by building a graph for actors, we model the
relations among them. The nodes of this graph represent
actors’ features extracted from a CNN, and the edges repre-
sent the relations among actors. Then we use a graph con-
volution network (GCN) [19] to model the group features
from the graph. The optimized features of the graph then
serve as “ queries ” and “ keys” for GOAT. In the temporal do-
2406
main, after feature extraction by the video feature backbone,
the clip-wise features serve as “ values ” for GOAT. Instead
of fusing the features simply using the average pooling as
most previous works [44,50,52], GOAT learns the relations
among clips and models the temporal features of the long-
term videos based on the spatial information in every clip.
The contributions of this paper can be summarized as:
(1) We construct the first multi-person long-form video
dataset, LOGO, for action quality assessment. To the best
of our knowledge, LOGO stands out for its longer aver-
age duration, the larger number of people, and richer an-
notations when compared to most existing datasets. Ex-
perimental results also reveal the challenges our proposed
dataset brings. (2) We propose a plug-and-play group-aware
module, GOAT, which models the group information and
the temporal contextual relations for input videos, bridg-
ing the gap between single-person short-sequence scenarios
and multi-person long-sequence scenarios. (3) Experimen-
tal results demonstrate that our group-aware approach ob-
tains substantial improvements compared to existing meth-
ods in AQA and achieves the state-of-the-art.
|
Yang_Geometry_and_Uncertainty-Aware_3D_Point_Cloud_Class-Incremental_Semantic_Segmentation_CVPR_2023 | Abstract
Despite the significant recent progress made on 3D
point cloud semantic segmentation, the current methods re-quire training data for all classes at once, and are not suit-
able for real-life scenarios where new categories are be-
ing continuously discovered. Substantial memory storageand expensive re-training is required to update the modelto sequentially arriving data for new concepts. In thispaper , to continually learn new categories using previous
knowledge, we introduce class-incremental semantic seg-mentation of 3D point cloud. Unlike 2D images, 3D pointclouds are disordered and unstructured, making it difficult
to store and transfer knowledge especially when the pre-vious data is not available. We further face the challengeof semantic shift, where previous/future classes are indis-
criminately collapsed and treated as the background in thecurrent step, causing a dramatic performance drop on pastclasses. We exploit the structure of point cloud and pro-
pose two strategies to address these challenges. First, wedesign a geometry-aware distillation module that transferspoint-wise feature associations in terms of their geomet-
ric characteristics. To counter forgetting caused by the
semantic shift, we further develop an uncertainty-awarepseudo-labelling scheme that eliminates noise in uncertainpseudo-labels by label propagation within a local neighbor-hood. Our extensive experiments on S3DIS and ScanNet ina class-incremental setting show impressive results compa-
rable to the joint training strategy (upper bound). Code isavailable at: https://github.com/leolyj/3DPC-CISS
| 1. Introduction
The semantic segmentation of 3D point cloud plays a
crucial role in applications such as virtual reality, robotics and autonomous vehicles. In recent years, a number of point cloud segmentation methods [16, 28, 29, 38, 39, 45] have
Corresponding Author: Yinjie Lei ([email protected]).
Figure 1 (diagram): training on base classes {ceiling, floor, wall, table}, then on novel classes {chair, clutter}, using Geometry-aware Feature-relation Transfer (GFT, Sec. 3.2) and Uncertainty-aware Pseudo-label Generation (UPG, Sec. 3.3); the novel model is used for inference on both base and novel classes.
Figure 1. To continually learn new categories without forgetting
the previous ones, we leverage Geometry-aware Feature-relation Transfer (Sec. 3.2) to distill point-wise relationships from the base
model and further employ Uncertainty-aware Pseudo-label Gener-ation (Sec. 3.3) to synthesize pseudo labels of old classes with low
uncertainty as guidance for novel model training.
achieved remarkable performance in the traditional setting
where all classes are learned at once. Nevertheless, newcategories are gradually discovered in real-life scenarios,
and updating the model to cater for these new categories re-
quires large memory storage and expensive re-training. Insuch situations, as illustrated in Fig. 1, class-incremental
learning provides a promising paradigm, since it enablesprogressively learning novel knowledge in an efficient man-
ner while preserving the previous capabilities.
The existing research on class-incremental learning is
mostly on 2D image classification [ 17,19,21,32] with some
efforts extended to RGB semantic segmentation [ 3,4,10,
41]. These methods employ a strategy based upon regu-
larization [11,19,44],rehearsal/replay [2,17,24,32,35]
orknowledge distillation [8,20,21] to preserve previous
knowledge. At present, only a few works have investigated
3D point clouds based incremental learning for classifica-
tion [ 5,6,9,22,43]. They focus on the classification of
an individual object and extend 2D methods to 3D. Unlikeclassification which only considers a single object (not thescene with multiple objects), continually learning to seg-
ment 3D point cloud in complex scenes introduces multiplenew challenges and has not been previously studied.
3D point clouds are disordered and unstructured, which
makes it difficult to preserve previous knowledge and re-sults in catastrophic forgetting [13,33,36]. This specifically
becomes pronounced when old data is not available. Wefurther observe that the 3D class-incremental segmentationfaces the phenomenon of semantic shift, where the pointsbelonging to old classes are indiscriminately collapsed intobackground during the current learning step. The semanticshift further suppresses the capability of the model to rec-ognize old categories, thus exacerbating forgetting.
In this paper, we are the first to propose a class-
incremental learning approach for 3D point cloud seman-tic segmentation. To prevent forgetting caused by unstruc-
tured point clouds, we design a Geometry-aware Feature-
relation Transfer (GFT) strategy to transfer the structural re-lationships among point features. Moreover, to address thesemantic shift issue, we assign uncertainty-aware pseudo-labels to the background points. Different from the con-ventional approaches, where pseudo-labels are directly ob-tained from the old model, we estimate uncertainties ac-cording to the distribution characteristics of points, andleverage the neighborhood information to propagate labelsfrom low to high uncertainties. Our Uncertainty-awarePseudo-label Generation (UPG), therefore, assists in elimi-nating the influence of noisy labels and helps tackle the se-
mantic shift issue. Note that our approach does not involve
any rehearsal or memory replay buffer to store old data orits annotations during the incremental process. We showpromises of our approach through comprehensive evalua-tions on benchmarks defined on public datasets i.e.,S3DIS
[1] and ScanNet [ 7]. Our key contributions are:
• A class-incremental learning framework for 3D point
cloud semantic segmentation, to sequentially adapt tonew classes from previous acquired knowledge.
• To transfer previous knowledge and prevent forgetting
caused by unstructured nature of the point clouds, wepropose a Geometry-aware Feature-relation Transfer
(GFT) module that captures the point-wise feature re-lations based on the geometric information.
• To tackle the semantic shift issue where old classes
are indiscriminately collapsed into the background, wedesign an Uncertainty-aware Pseudo-label Generation(UPG) strategy to enhance pseudo-labelling quality
and thus provide effective guidance for old classes.
• Compared with several baselines on multiple bench-marks, our approach achieves promising results for 3D
class-incremental semantic segmentation, closer to thejoint training (upper bound) using all data at once.
|
Yin_Multi-Space_Neural_Radiance_Fields_CVPR_2023 | Abstract
Existing Neural Radiance Fields (NeRF) methods suf-
fer from the existence of reflective objects, often result-
ing in blurry or distorted rendering. Instead of calculat-
ing a single radiance field, we propose a multi-space neu-
ral radiance field (MS-NeRF) that represents the scene us-
ing a group of feature fields in parallel sub-spaces, which
leads to a better understanding of the neural network to-
ward the existence of reflective and refractive objects. Our
multi-space scheme works as an enhancement to existing
NeRF methods, with only small computational overheads
needed for training and inferring the extra-space outputs.
We demonstrate the superiority and compatibility of our ap-
proach using three representative NeRF-based models, i.e.,
NeRF , Mip-NeRF , and Mip-NeRF 360. Comparisons are
performed on a novelly constructed dataset consisting of 25
synthetic scenes and 7 real captured scenes with complex
reflection and refraction, all having 360-degree viewpoints.
Extensive experiments show that our approach significantly
outperforms the existing single-space NeRF methods for
rendering high-quality scenes concerned with complex light
paths through mirror-like objects. Our code and dataset
will be publicly available at https://zx-yin.github.io/msnerf.
| 1. Introduction
Neural Radiance Fields (NeRF) [25] and its variants are
refreshing the community of neural rendering, and the po-
tential for more promising applications is still under ex-
ploration. NeRF represents scenes as continuous radiance
fields stored by simple Multi-layer Perceptrons (MLPs) and
renders novel views by integrating the densities and radi-
ance, which are queried from the MLPs by points sampled
along the ray from the camera to the image plane. Since its
first presentation [25], many efforts have been investigated
to enhance the method, such as extending to unbounded
scenes [2, 50], handling moving objects [29, 30, 37], or re-
constructing from pictures in the wild [6, 21, 35, 49].
*Bo Ren is the corresponding author.
(a) Mip-NeRF 360 [2], SSIM=0.825
(b) Our Model, SSIM=0.881
Figure 1. (a) Though Mip-NeRF 360 can handle unbounded
scenes, it still suffers from reflective surfaces, as the virtual images
violate the multi-view consistency, which is of vital importance
to NeRF-based methods. (b) Our method can help conventional
NeRF-like methods learn the virtual images with little extra cost.
However, rendering scenes with mirrors is still a chal-
lenging task for state-of-the-art NeRF-like methods. One of
the principal assumptions of the NeRF method is the multi-
view consistency property of the target scenes [16, 20, 36].
When there are mirrors in the space, if one allows the view-
points to move 360-degree around the scene, there is no
consistency between the front and back views of a mirror,
since the mirror surface and its reflected virtual image are
only visible from a small range of views. As a result, it is
often required to manually label the reflective surfaces in
order to avoid falling into sub-optimal convergences [12].
In this paper, we propose a novel multi-space NeRF-
based method to allow the automatic handling of mirror-like
objects in the 360-degree high-fidelity rendering of scenes
without any manual labeling. Instead of regarding the Eu-
clidean scene space as one single space, we treat it as com-
posed of multiple virtual sub-spaces, whose composition
changes according to location and view direction. We show
that our approach using such a multi-space decomposition
leads to successful handling of complex reflections and
refractions where the multi-view consistency is heavily vi-
olated in the Euclidean real space. Furthermore, we show
that the above benefits can be achieved by designing a low-
cost multi-space module and replacing the original output
layer with it. Therefore, our multi-space approach serves as
a general enhancement to the NeRF-based backbone, equip-
ping most NeRF-like methods with the ability to model
complex reflection and refraction, as shown in Fig. 1.
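The sketch below illustrates, under stated assumptions, what a lightweight multi-space output head could look like: K parallel density/color outputs plus softmax blend weights replacing a single output layer. The feature width, the number of sub-spaces, and the per-sample blending (rather than compositing after volume-rendering each sub-space) are simplifications, not the paper's exact module.

```python
import torch
import torch.nn as nn

class MultiSpaceHead(nn.Module):
    """Replace a NeRF output layer with K parallel sub-space outputs plus blend weights."""
    def __init__(self, feat_dim: int = 128, num_spaces: int = 6):
        super().__init__()
        self.k = num_spaces
        self.sigma = nn.Linear(feat_dim, num_spaces)      # one density per sub-space
        self.rgb = nn.Linear(feat_dim, num_spaces * 3)    # one color per sub-space
        self.weight = nn.Linear(feat_dim, num_spaces)     # view-dependent blend logits

    def forward(self, h: torch.Tensor):
        """h: (..., feat_dim) per-sample features from the NeRF backbone."""
        sigma = torch.relu(self.sigma(h))                                 # (..., K)
        rgb = torch.sigmoid(self.rgb(h)).reshape(*h.shape[:-1], self.k, 3)
        w = torch.softmax(self.weight(h), dim=-1)                         # (..., K)
        # Blend the sub-space predictions per sample (a simplification of
        # compositing the K separately rendered sub-spaces).
        blended_rgb = (w.unsqueeze(-1) * rgb).sum(dim=-2)                 # (..., 3)
        blended_sigma = (w * sigma).sum(dim=-1)                           # (...,)
        return blended_rgb, blended_sigma
```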
Existing datasets have not paid enough attention to the
360-degree rendering of scenes containing mirror-like ob-
jects: RFFR [12] has only forward-facing scenes, and the
Shiny dataset in [42] covers only small viewpoint changes
and cannot exhibit view-dependent effects at a large angular
scale. Therefore, we construct a novel dataset dedicated
to evaluation for the 360-degree high-fidelity rendering of
scenes containing complex reflections and refractions. In
this dataset, we collect 25 synthesized scenes and 7 captured
real-world scenes. Each synthesized scene consists of 120
images of 360-degree around reflective or refractive objects,
with 100 randomly split for training, 10 for validation, and
10 for evaluation. Each real-world scene is captured ran-
domly around scenes with reflective and refractive objects,
consisting of 62 to 118 images, and organized under the
convention of LLFF [24]. We then demonstrate the supe-
riority and compatibility of our approach by comparisons,
using three representative baseline models, i.e., NeRF [25],
Mip-NeRF [1], and Mip-NeRF 360 [2], with and without
our multi-space module. Experiments show that our ap-
proach improves performance by a large margin both quan-
titatively and qualitatively on scenes with reflection and re-
fraction. Our main contributions are as follows:
• We propose a multi-space NeRF method that auto-
matically handles mirror-like objects in 360-degree
high-fidelity scene rendering, achieving significant im-
provements over existing representative baselines both
quantitatively and qualitatively.
• We design a lightweight module that can equip most
NeRF-like methods with the ability to model reflection
and refraction with small computational overheads.
• We construct a dataset dedicated to evaluation for the
360-degree high-fidelity rendering of scenes contain-
ing complex reflections and refractions, including 25
synthesized scenes and 7 real captured scenes.
|
Yu_Hint-Aug_Drawing_Hints_From_Foundation_Vision_Transformers_Towards_Boosted_Few-Shot_CVPR_2023 | Abstract
Despite the growing demand for tuning foundation vision
transformers (FViTs) on downstream tasks, fully unleash-
ing FViTs’ potential under data-limited scenarios (e.g., few-
shot tuning) remains a challenge due to FViTs’ data-hungry
nature. Common data augmentation techniques fall short
in this context due to the limited features contained in the
few-shot tuning data. To tackle this challenge, we first iden-
tify an opportunity for FViTs in few-shot tuning: pretrained
FViTs themselves have already learned highly representa-
tive features from large-scale pretraining data, which are
fully preserved during widely used parameter-efficient tun-
ing. We thus hypothesize that leveraging those learned fea-
tures to augment the tuning data can boost the effective-
ness of few-shot FViT tuning. To this end, we propose a
framework called Hint -based Data Augmentation ( Hint-
Aug), which aims to boost FViT in few-shot tuning by aug-
menting the over-fitted parts of tuning samples with the
learned features of pretrained FViTs. Specifically, Hint-Aug
integrates two key enablers: (1) an Attentive Over-fitting
Detector ( AOD ) to detect over-confident patches of founda-
tion ViTs for potentially alleviating their over-fitting on the
few-shot tuning data and (2) a Confusion-based Feature
Infusion ( CFI) module to infuse easy-to-confuse features
from the pretrained FViTs with the over-confident patches
detected by the above AOD in order to enhance the feature
diversity during tuning. Extensive experiments and ablation
studies on five datasets and three parameter-efficient tuning
techniques consistently validate Hint-Aug’s effectiveness:
0.04%∼32.91% higher accuracy over the state-of-the-art
(SOTA) data augmentation method under various low-shot
settings. For example, on the Pet dataset, Hint-Aug achieves
a 2.22% higher accuracy with 50% less training data over
SOTA data augmentation methods.
| 1. Introduction
Foundation vision transformers (FViTs) [16, 41, 54, 55,
64] with billions of floating point operations (FLOPs) and
parameters have recently demonstrated significant poten-tial in various downstream tasks [40, 41]. The success of
FViTs has ushered in a new paradigm in deep learning:
pretraining-then-tuning [16, 40, 67], which first pretrains an
FViT on a large-scale dataset, then uses recently developed
parameter-efficient tuning methods (e.g., visual prompt tun-
ing (VPT) [34], visual prompting [2], LoRA [33], and
Adapter [72]) to tune pretrained FViTs on downstream tasks
with limited tuning data. However, although it is highly de-
sirable, effectively tuning pretrained FViTs for real-world
applications, especially under few-shot tuning scenarios, re-
mains a particularly challenging task. The reason is that al-
though parameter-efficient tuning methods are dedicatedly
designed for FViTs and can alleviate the overfitting issue by
reducing the number of trainable parameters [2, 34, 72], the
data-hungry nature of FViTs [16, 54] is not mitigated and
thus the achievable accuracy under data-limited scenarios
(e.g., few-shot tuning scenarios) are still limited. Therefore,
how to effectively tune pretrained FViTs on various down-
stream tasks with few-shot tuning is still an open question.
To enhance the effectiveness of parameter-efficient FViT
tuning under few-shot settings, one promising direction is to
leverage data augmentation techniques to increase the data
diversity and thus the feature diversity of the models when
being tuned on few-shot data, boosting the achievable accu-
racy [12, 31, 68, 71]. Nevertheless, it has been shown that
existing data augmentation techniques fall short in boost-
ing the model accuracy under few-shot tuning scenarios.
This is because most of the existing data augmentation tech-
niques are random-based (e.g., RandAugment [13], Au-
toAugment [12], color jitter, mixup [71], and cutmix [68]),
which only randomly permute existing features in the train-
ing data and thus cannot generate new and meaningful
features [63]. As illustrated in Fig. 1, we observe that
neither the widely-used random-based data augmentation
techniques (i.e., a dedicated combination of techniques in-
cluding RandAugment [13], color jitter, and random eras-
ing [74] as in [72]) nor training without data augmentation
can consistently achieve a satisfactory accuracy across dif-
†Our code is available at https://github.com/GATECH-EIC/Hint-Aug
Figure 1. The normalized achieved accuracies when few-shot tun-
ing the ViT-base model [16] on various datasets and numbers of
tuning shots using (1) vanilla training without augmentation (i.e.,
No-Aug), (2) the SOTA parameter-efficient tuning technique [72]
(i.e., NPS), and (3) our proposed Hint-Aug.
ferent datasets under few-shot tuning. Specifically, when
being applied to fine-grained classification tasks, e.g., the
Aircraft dataset [45], these random-based data augmenta-
tion techniques actually hurt the achievable accuracy. The
reason is that random-based data augmentation techniques
can easily create out-of-manifold samples [68, 71], espe-
cially on commonly used fine-grained datasets. Such out-
of-manifold samples can largely degrade the achievable ac-
curacy given the limited number of training samples under
few-shot tuning scenarios [27]. Therefore, it is crucial to
develop data augmentation techniques that can adaptively
augment the given training samples with diverse, but still
within-manifold, features to boost the effectiveness of tun-
ing FViTs on various downstream tasks.
This work sets out to close the growing gap between the demand for effective few-shot FViT tuning and the unsatisfactory accuracy achievable with existing techniques. In particular, we identify that in few-shot
parameter-efficient tuning, the pretrained FViTs’ weights
are fixed during tuning. Meanwhile, existing works have
shown that (1) pretrained transformer models have already
learned complex but generalizable features [16, 41, 54] and
(2) gradient-based methods can extract the learned features
from pretrained models and then add them to the input
images [44, 46]. Therefore, we hypothesize that FViTs’
few-shot tuning accuracies can be non-trivially improved
by leveraging the learned features in the pretrained FViTs.
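A minimal sketch of this hypothesis is given below: a single sign-gradient (FGSM-style) step on a frozen pretrained ViT's ground-truth logit "paints" class features the model has already learned into the input image. The timm model name, step size, and [0, 1] input range are illustrative assumptions, not the exact Hint-Aug procedure.

```python
import torch
import timm  # assumption: a timm ViT stands in for the pretrained FViT

def infuse_learned_features(images, labels, model, step_size=2 / 255):
    """One FGSM-style gradient step that strengthens, in pixel space, the
    features a frozen pretrained model has learned for each image's class
    (a sketch of the hypothesis, not Hint-Aug itself). Assumes inputs in [0, 1]."""
    model.eval()
    images = images.clone().requires_grad_(True)
    logits = model(images)
    # Ascend the ground-truth-class logit so its learned features are added.
    target_logit = logits.gather(1, labels.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(target_logit, images)
    return (images + step_size * grad.sign()).clamp(0, 1).detach()

# Illustrative usage with an off-the-shelf pretrained ViT.
model = timm.create_model("vit_base_patch16_224", pretrained=True)
```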
Specifically, we make the following contributions:
• We propose a framework called Hint-based Data Augmentation (Hint-Aug), which is dedicated to boosting
the achievable accuracy of FViTs under few-shot tuning
scenarios by leveraging the learned features of pretrained
FViTs to guide the data augmentation strategy used for the training dataset in an input-adaptive manner.
• Our Hint-Aug framework integrates two enablers: (1) an
Attentive Over-fitting Detector (AOD) to identify the over-
fitting samples and patches in the given training dataset by
making use of the attention maps of pretrained FViTs and
(2) a Confusion-based Feature Infusion (CFI) module to
adaptively infuse pretrained FViTs’ learned features into the training data to better tune those models on downstream tasks, alleviating the commonly recognized challenge of having limited features under few-shot tuning (a simplified sketch of the attention-based detection step in AOD follows this list).
• Extensive experiments and ablation studies on five datasets
and three parameter-efficient tuning techniques consis-
tently validate the effectiveness of our proposed Hint-Aug
framework, which achieves 0.04%∼32.91% higher accuracy than state-of-the-art (SOTA) data augmentation methods [72] across different datasets and few-shot settings. For example, on the Pets dataset, Hint-Aug achieves
a 2.22% higher accuracy with 50% less training data com-
pared with the SOTA augmentation method.
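As referenced in the second contribution above, the following is a simplified, hedged sketch of how over-attended samples and patches could be flagged from a ViT attention map; the entropy criterion, threshold, and use of a single attention block are assumptions of this sketch rather than the exact AOD design.

```python
import torch

def attentive_overfitting_flags(attn, entropy_thresh=3.0, top_k=4):
    """Flag potentially over-fitted samples from a ViT attention tensor.

    attn: (B, heads, tokens, tokens) attention weights from the last block,
    with token 0 being the CLS token. Low entropy of the CLS->patch attention
    is used here as a proxy for over-concentration on a few patches.
    The threshold and single-block choice are illustrative assumptions.
    """
    cls_to_patch = attn.mean(dim=1)[:, 0, 1:]                 # (B, num_patches)
    cls_to_patch = cls_to_patch / cls_to_patch.sum(dim=-1, keepdim=True)
    entropy = -(cls_to_patch * (cls_to_patch + 1e-12).log()).sum(dim=-1)
    flagged = entropy < entropy_thresh                        # suspiciously peaky attention
    top_patches = cls_to_patch.topk(top_k, dim=-1).indices    # most-attended patches
    return flagged, top_patches
```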
|
Zhang_GeoMVSNet_Learning_Multi-View_Stereo_With_Geometry_Perception_CVPR_2023 | Abstract
Recent cascade Multi-View Stereo (MVS) methods can
efficiently estimate high-resolution depth maps through nar-
rowing hypothesis ranges. However, previous methods ig-
nored the vital geometric information embedded in coarse
stages, leading to vulnerable cost matching and sub-optimal
reconstruction results. In this paper, we propose a ge-
ometry awareness model, termed GeoMVSNet, to explic-
itly integrate geometric clues implied in coarse stages for
delicate depth estimation. In particular, we design a two-
branch geometry fusion network to extract geometric pri-
ors from coarse estimations to enhance structural feature
extraction at finer stages. Besides, we embed the coarse
probability volumes, which encode valuable depth distri-
bution attributes, into the lightweight regularization net-
work to further strengthen depth-wise geometry intuition.
Meanwhile, we apply the frequency domain filtering to mit-
igate the negative impact of the high-frequency regions
and adopt the curriculum learning strategy to progres-
sively boost the geometry integration of the model. To in-
tensify the full-scene geometry perception of our model,
we present the depth distribution similarity loss based on
the Gaussian-Mixture Model assumption. Extensive exper-
iments on DTU and Tanks and Temples (T&T) datasets demonstrate that our GeoMVSNet achieves state-of-the-art results and ranks first on the T&T-Advanced set. Code is
available at https://github.com/doubleZ0108/GeoMVSNet.
| 1. Introduction
Multi-View Stereo (MVS) reconstructs the dense ge-
ometry representation of a scene from multiple overlap-
ping photographs, which is an influential branch of three-
dimensional (3D) computer vision and has been extensively
studied for decades. Learning-based MVS methods aggre-
gate cost volume from different viewpoints and use neu-
ral networks for cost regularization, which achieve superior
performance compared with traditional methods.
Recently, cascade-based architectures [7, 14, 54] have been widely applied. They compute different resolution
depth maps in a coarse-to-fine manner and progressively
narrow hypothesis plane guidance to reduce computational
complexity. However, these approaches do not take ad-
vantage of valuable insight contained in early phases and
only consider the pixel-wise depth attribute. Some meth-
ods, e.g. deformable kernel-based [47] and transformer-
based [4, 8, 22, 27, 46], introduce finely designed external
structures for feature extraction but do not fully exploit the
geometric clues embedded in the MVS scenarios.
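For context, the hypothesis-range narrowing used by cascade MVS pipelines can be sketched as below; the plane count and interval are placeholder values, and this reflects the generic cascade scheme rather than GeoMVSNet's specific settings.

```python
import torch

def narrowed_depth_hypotheses(coarse_depth, num_planes=8, interval=2.5):
    """Build per-pixel depth hypotheses for a finer cascade stage, centered on
    the (upsampled) coarse depth and spanning a narrowed range (a generic
    cascade-MVS sketch; plane count and interval are illustrative)."""
    offsets = (torch.arange(num_planes, device=coarse_depth.device)
               - (num_planes - 1) / 2) * interval              # (D,)
    hypotheses = coarse_depth.unsqueeze(1) + offsets.view(1, -1, 1, 1)
    return hypotheses.clamp(min=1e-3)                          # (B, D, H, W)
```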
Unlike existing works, we propose to explore the ge-
ometric structures embedded in coarse stages for delicate
estimations in finer stages. In particular, we build a two-
branch fusion network to integrate geometric priors con-
tained in coarse depth maps with ordinary features extracted
by the classic FPN [23], and the fused geometry awareness
features can provide solid foundations for robust aggrega-
tion. Meanwhile, coarse probability volumes with abundant
geometric structures are embedded into the regularization
network, and we replace the heavy 3D convolution with en-
hanced 2D regularization without degrading the quality of
depth-wise correlation, resulting in lightweight but robust
cost matching. However, MVS networks tend to produce
severe misestimation in high-frequency cluttered textures due
to confused matching in coarse stages, which inevitably af-
fects explicit geometry perception. We are inspired by the
human behavior that a nearsighted person can still perceive
a scene well without glasses, even if the texture details can-
not be seen clearly. Based on the observation, we refer to
the idea of curriculum learning [2] to embed coarse geo-
metric priors into finer stages from easy to difficult. Specif-
ically, we utilize the frequency domain filtering strategy
to effectively alleviate redundant high-frequency textures
without producing more learning parameters and leverage
geometric structures embedded in different hierarchies of
frequency for gradually delicate depth estimation.
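A minimal sketch of such frequency-domain filtering is given below, using an ideal low-pass filter whose cutoff grows during training; the filter shape and the keep-ratio schedule are assumptions of this sketch, not the paper's exact recipe.

```python
import torch
import torch.fft

def lowpass_filter(image, keep_ratio):
    """Suppress high-frequency content of an image batch (B, C, H, W) with an
    ideal low-pass filter in the 2D Fourier domain (sketch only)."""
    b, c, h, w = image.shape
    spectrum = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dist = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2).sqrt().to(image.device)
    mask = (dist <= keep_ratio * min(h, w) / 2).to(image.dtype)
    filtered = torch.fft.ifft2(torch.fft.ifftshift(spectrum * mask, dim=(-2, -1)))
    return filtered.real

# Curriculum idea: reveal more high-frequency detail as training progresses,
# e.g. keep_ratio = min(1.0, 0.3 + 0.7 * epoch / total_epochs)  # illustrative schedule
```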
In addition, the depth ranges of MVS scenarios are often concentrated in several intervals. To exploit this, we adopt the Gaussian-Mixture Model to simulate the full-scene depth distribution, and the PauTa Criterion [31] allows us to identify locations that are too close or too far, hidden in the long tail of the depth distribution curve, e.g., sky. Finally, the depth distribution loss is proposed for full-scene similarity supervision.
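A rough, non-differentiable sketch of a Gaussian-Mixture depth-distribution score is shown below with NumPy/scikit-learn for clarity; the component count, the negative-log-likelihood formulation, and the validity mask are assumptions, and a differentiable variant would be required for actual training supervision.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def depth_distribution_score(pred_depth, gt_depth, n_components=3):
    """Score how well predicted depths (NumPy array) follow the full-scene depth
    distribution of the ground truth, modeled as a Gaussian mixture
    (a hedged stand-in for the paper's loss, not its exact formulation)."""
    gt = gt_depth[gt_depth > 0].reshape(-1, 1)            # valid GT depths only
    gmm = GaussianMixture(n_components=n_components).fit(gt)
    nll = -gmm.score_samples(pred_depth.reshape(-1, 1))   # per-pixel negative log-likelihood
    return float(nll.mean())
```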
In summary, the main contributions are as follows.
• We propose the geometric prior guided feature fusion
and the probability volume geometry embedding ap-
proaches for robust cost matching.
• We enhance geometry awareness via the frequency do-
main filtering strategy and adopt the idea of curriculum
learning for progressively introducing geometric clues
from easy to difficult.
• We model the depth distribution of MVS scenarios us-
ing the Gaussian-Mixture Model assumption and build
the full-scene geometry perception loss function.
• The proposed method is extensively evaluated on the
DTU dataset and both intermediate and advanced sets
of the Tanks and Temples benchmark, all achieving new state-of-the-art performance.
|
Ye_Self-Supervised_Super-Plane_for_Neural_3D_Reconstruction_CVPR_2023 | Abstract
Neural implicit surface representation methods show
impressive reconstruction results but struggle to handle
texture-less planar regions that widely exist in indoor
scenes. Existing approaches addressing this leverage image
prior that requires assistive networks trained with large-
scale annotated datasets. In this work, we introduce a
self-supervised super-plane constraint by exploring the free
geometry cues from the predicted surface, which can fur-
ther regularize the reconstruction of plane regions without
any other ground truth annotations. Specifically, we in-
troduce an iterative training scheme, where (i) grouping
of pixels to formulate a super-plane (analogous to super-
pixels), and (ii) optimizing of the scene reconstruction net-
work via a super-plane constraint, are progressively con-
ducted. We demonstrate that the model trained with super-
planes surprisingly outperforms the one using conventional
annotated planes, as individual super-plane statistically oc-
cupies a larger area and leads to more stable training. Ex-
tensive experiments show that our self-supervised super-
plane constraint significantly improves 3D reconstruction
quality even better than using ground truth plane segmen-
tation. Additionally, the plane reconstruction results from
our model can be used for auto-labeling for other vision
tasks. The code and models are available at https:
//github.com/botaoye/S3PRecon .
| 1. Introduction
Reconstructing 3D scenes from multi-view RGB images
is an important but challenging task in computer vision,
which has numerous applications in autonomous driving,
virtual reality, robotics, etc. Existing matching-based meth-
ods [30, 31, 50] estimate per-view depth maps, which are
then fused to construct 3D representation. However, they do
not recover the depth of the scene in texture-less planar ar-
eas well (such as walls, floors, and other solid color planes),
which are abundant, especially for indoor scenes. Recent
data-driven methods [22, 26, 34, 38] alleviate this problem
to some extent by automatically learning geometric priors
from large-scale training data, but they either require numerous and expensive 3D supervision (e.g., depth [22, 38], normal [14], etc.) or lack fine-grained details [26, 34].
Figure 1. Reconstruction and Plane Segmentation Results (panels: Input View, Surface Reconstruction, Plane Segmentation). Our method can reconstruct smooth and complete planar regions by employing the super-plane constraint and further obtain plane segmentation in an unsupervised manner.
Recently, neural implicit representations have gained
much attention and shown impressive reconstruction results
without 3D supervision [39, 46, 47]. However, these meth-
ods purely rely on the photo-consistency to construct the
scene, which leads to texture-geometry ambiguities since
there are different plausible interpretations to satisfy this
objective. Several approaches address this problem by in-
troducing additional priors obtained from trained models
that can be considered as assistant networks . For instance,
ManhattanSDF [9] introduces the Manhattan assumption on
the floor and wall regions, which are predicted by a seman-
tic segmentation model. NeuRIS [37] and MonoSDF [48]
adopt additional normal supervision acquired from normal
prediction networks trained on large-scale labeled datasets.
Although these methods can regularize the reconstruction
process on indoor scenes, they all rely on large-scale an-
notated 2D or 3D datasets. In addition, these pretrained
geometric prediction networks are sensitive to different
scenes and not friendly across different domains or datasets.
For example, MonoSDF [48] reports that different normal
prediction networks significantly affect the reconstruction
quality. Thus, a natural question arises: can we improve the
RGB-based reconstruction results on texture-less regions
without any implicit supervision from assistant networks?
In this work, we propose a novel neural 3D reconstruc-
tion framework with the Self-Supervised Super-Plane con-
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
21415
straint, called S3P, which does not require any labeled data
or assistant networks. The intuition behind our approach
is simple: the constructed results provide a free geometry
cue, i.e., surface normal, which can be utilized to guide the
reconstruction process of planar regions. Specifically, pix-
els belonging to parallel planes tend to have similar normal
directions (shown in Fig 3). We group pixels sharing sim-
ilar normal values into the same cluster, which we call a
super-plane (analogous to how a group of pixels is called a super-pixel). A super-plane constraint is then applied to force
normal directions within the same super-plane to be con-
sistent, thus constraining the reconstruction of large super-
plane regions. Due to the ambiguity of the prediction, espe-
cially at the early training stages, the grouped super-plane
can be inaccurate and introduces noisy self-supervision sig-
nals. Therefore, an automatic filtering strategy is further
designed to compare the normals of each pixel with the es-
timated super-plane normal, i.e., the average normal of all
pixels belonging to the same super-plane, and outliers with
large angular differences will be filtered out. We also detect
the discontinuity of the surface according to the geometry
and color features and mask out non-plane edge regions in
the super-plane segmentation maps.
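A hedged sketch of this grouping-and-constraining step is given below: per-pixel normals are clustered with a spherical k-means (without gradients), outliers beyond an angular threshold are discarded, and the remaining pixels are pulled toward their super-plane's mean normal. The cluster count, iteration budget, and 30-degree threshold are assumptions of this sketch, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def super_plane_loss(normals, num_clusters=6, angle_thresh_deg=30.0, iters=10):
    """Group rendered per-pixel normals (..., 3) into super-planes and penalize
    each inlier pixel's deviation from its super-plane's mean normal (sketch)."""
    n = F.normalize(normals.reshape(-1, 3), dim=-1)                    # (N, 3)
    with torch.no_grad():                                              # grouping step
        centers = n[torch.randperm(n.shape[0])[:num_clusters]].clone()
        for _ in range(iters):                                         # spherical k-means
            assign = (n @ centers.t()).argmax(dim=-1)
            for k in range(num_clusters):
                members = n[assign == k]
                if members.shape[0] > 0:
                    centers[k] = F.normalize(members.mean(0), dim=0)
    cos = (n * centers[assign]).sum(-1).clamp(-1.0, 1.0)
    inlier = cos > torch.cos(torch.deg2rad(torch.tensor(angle_thresh_deg)))
    return (1.0 - cos[inlier]).mean()                                  # consistency loss
```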
Notably, by using normals for grouping, multiple paral-
lel planes will be grouped together, so that our super-planes
are typically larger than individual planes. This property is
particularly beneficial for volume rendering-based training
process [25] since planes with more pixels will also have
more stable and accurate averaged normals when limited
pixels are sampled in each iteration. We experimentally
verify this benefit: our super-plane constraint yields better
results than adopting ground truth plane segmentation.
As a by-product, self-supervised plane segmentation of
the reconstructed scene can be easily obtained from the
super-plane masks by extracting connected components
separated by the detected non-plane edge regions. Thus, our
approach can be extended to reconstruct planes of a scene
without ground truth supervision. It can also be applied to
label new scenes automatically for applications that require
such training data. The main contributions of this work are:
• We introduce a super-plane constraint for neural im-
plicit scene reconstruction by first generating super-
planes in an unsupervised manner and then performing
automatic outlier filtering.
• Our super-plane segmentation method can be further
extended to get unsupervised plane reconstruction re-
sults, which can be used as auto-labeling.
• Experimental results show that our method signifi-
cantly improves the reconstruction quality of texture-
less planar regions, and the unsupervised plane recon-
struction results are comparable to those from state-of-
the-art supervised methods.
Methods | Explicit 3D supervision | Implicit 2D/3D supervision | Handle texture-less
Patch Match-based MVS | × | × | ×
Data-driven MVS | ✓ | × | ✓
NeuS [39], VolSDF [46] | × | × | ×
ManhattanSDF [9] | × | ✓(2D) | ✓
NeuRIS [37], MonoSDF [48] | × | ✓(3D) | ✓
Ours | × | × | ✓
Table 1. Comparison between different reconstruction methods. Our method can handle texture-less planar regions without implicit supervision provided by assistant networks.
|
Xu_OmniAvatar_Geometry-Guided_Controllable_3D_Head_Synthesis_CVPR_2023 | Abstract
We present OmniAvatar, a novel geometry-guided 3D
head synthesis model trained from in-the-wild unstructured
images that is capable of synthesizing diverse identity-
preserved 3D heads with compelling dynamic details un-
der full disentangled control over camera poses, facial ex-
pressions, head shapes, articulated neck and jaw poses. To
achieve such a high level of disentangled control, we first ex-
plicitly define a novel semantic signed distance function
(SDF) around a head geometry (FLAME) conditioned on
the control parameters. This semantic SDF allows us to
build a differentiable volumetric correspondence map from
the observation space to a disentangled canonical space
from all the control parameters. We then leverage the 3D-
aware GAN framework (EG3D) to synthesize detailed shape
and appearance of 3D full heads in the canonical space, fol-
lowed by a volume rendering step guided by the volumetric
correspondence map to output into the observation space.
To ensure the control accuracy on the synthesized headshapes and expressions, we introduce a geometry prior loss
to conform to head SDF and a control loss to conform to the
expression code. Further, we enhance the temporal realism
with dynamic details conditioned upon varying expressions
and joint poses. Our model can synthesize more preferable
identity-preserved 3D heads with compelling dynamic de-
tails compared to the state-of-the-art methods both qualita-
tively and quantitatively. We also provide an ablation study
to justify many of our system design choices.
| 1. Introduction
Photo-realistic face image synthesis, editing and ani-
mation attract significant interests in computer vision and
graphics, with a wide range of important downstream appli-
cations in visual effects, digital avatars, telepresence and
many others. With the advent of Generative Adversar-
ial Networks (GANs) [15], remarkable progress has been
achieved in face image synthesis by StyleGAN [23–25]
as well as in semantic and style editing for face im-
ages [46, 54]. To manipulate and animate the expressions
and poses in face images, many methods attempted to lever-
age 3D parametric face models, such as 3D Morphable
Models (3DMMs) [6, 40], with StyleGAN-based synthesis
models [10, 41, 53]. However, all these methods operate on
2D convolutional networks (CNNs) without explicitly en-
forcing the underlying 3D face structure. Therefore they
cannot strictly maintain the 3D consistency when synthe-
sizing faces under different poses and expressions.
Recently, a line of work has explored neural 3D rep-
resentations by unsupervised training of 3D-aware GANs
from in-the-wild unstructured images [7, 8, 11, 17, 37, 44,
45, 48, 61, 62, 67]. Among them, methods with generative
Neural Radiance Fields (NeRFs) [33] have demonstrated
striking quality and multi-view-consistent image synthesis
[7,11,17,37,48]. The progress is largely due to the integra-
tion of the power of StyleGAN in photo-realistic image syn-
thesis and NeRF representation in 3D scene modeling with
view-consistent volumetric rendering. Nevertheless, these methods lack precise 3D control over the generated faces beyond camera pose, and the quality and consistency of control over other attributes, such as shape, expression, and neck and jaw pose, leave much to be desired.
In this work, we present OmniAvatar, a novel geometry-
guided 3D head synthesis model trained from in-the-wild
unstructured images. Our model can synthesize a wide
range of 3D human heads with full control over camera
poses, facial expressions, head shapes, articulated neck and
jaw poses. To achieve such a high level of disentangled control for 3D human head synthesis, we devise our model learning in two stages. We first define a novel semantic
signed distance function (SDF) around a head geometry (i.e.
FLAME [29]) conditioned on its control parameters. This
semantic SDF fully distills rich 3D geometric prior knowl-
edge from the statistical FLAME model and allows us to
build a differentiable volumetric correspondence map from
the observation space to a disentangled canonical space
from all the control parameters. In the second training stage,
we then leverage the state-of-the-art 3D GAN framework
(EG3D [7]) to synthesize realistic shape and appearance of
3D human heads in the canonical space, including the mod-
eling of hair and apparels. Following that, a volume render-
ing step is guided by the volumetric correspondence map to
output the geometry and image in the observation space.
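As a crude stand-in for the SDF-driven volumetric correspondence map, the sketch below maps observation-space points to canonical space by snapping each point to the nearest vertex of the posed FLAME mesh and reusing that vertex's canonical position; the real mapping in the paper is smooth and defined by the semantic SDF, so this is only an illustration of the idea.

```python
import torch

def observation_to_canonical(points, posed_verts, canonical_verts):
    """Approximate observation-to-canonical warping via nearest-vertex lookup.

    points: (N, 3) query points in observation space.
    posed_verts / canonical_verts: (V, 3) mesh vertices in correspondence.
    This is a nearest-vertex stand-in, not the paper's SDF-based map."""
    d2 = torch.cdist(points, posed_verts)          # (N, V) pairwise distances
    nearest = d2.argmin(dim=-1)                    # index of closest posed vertex
    offset = points - posed_verts[nearest]         # keep the residual offset
    return canonical_verts[nearest] + offset       # approximate canonical location
```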
To ensure the consistency of synthesized 3D head shape
with controlling head geometry, we introduce a geometry
prior loss to minimize the difference between the synthe-
sized neural density field and the FLAME head SDF in ob-
servation space. Furthermore, to improve the control ac-
curacy, we pre-train an image encoder of the control pa-
rameters and formulate a control loss to ensure that synthesized images match the input control code upon encoding. Another key aspect of synthesis realism is dynamic details such
as wrinkles and varying shading as subjects change expres-
sions and poses. To synthesize dynamic details, we propose
to condition EG3D’s triplane feature decoding with noised
controlling expression.
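One plausible form of the geometry prior loss is sketched below: the FLAME head SDF sampled at the same points is converted to a reference density with a VolSDF-style Laplace-CDF mapping and compared to the synthesized density with an L1 penalty. The conversion choice, the beta value, and the L1 form are assumptions of this sketch, and the SDF is assumed positive outside the head.

```python
import torch

def geometry_prior_loss(pred_density, flame_sdf, beta=0.005):
    """Encourage the synthesized density field to agree with the FLAME head SDF
    by converting the SDF (positive outside) to a reference density with a
    Laplace-CDF mapping and penalizing the L1 difference (sketch only)."""
    alpha = 1.0 / beta
    target = alpha * torch.where(
        flame_sdf > 0,
        0.5 * torch.exp(-flame_sdf / beta),        # low density outside the head
        1.0 - 0.5 * torch.exp(flame_sdf / beta),   # near-constant density inside
    )
    return (pred_density - target).abs().mean()
```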
Compared to state-of-the-art methods, our method
achieves superior synthesized image quality in terms of
Frechet Inception Distance (FID) and Kernel Inception Dis-
tance (KID). Our method can consistently preserve the iden-
tity of synthesized subjects with compelling dynamic de-
tails while changing expressions and poses, outperforming
prior methods both quantitatively and qualitatively.
The contributions of our work can be summarized as:
• A novel geometry-guided 3D GAN framework for
high-quality 3D head synthesis with full control on
camera poses, facial expressions, head shapes, artic-
ulated neck and jaw poses.
• A novel semantic SDF formulation that defines the vol-
umetric correspondence map from observation space
to canonical space and allows full disentanglement of
control parameters in 3D GAN training.
• A geometric prior loss and a control loss to ensure the
head shape and expression synthesis accuracy.
• A robust noised expression conditioning scheme to en-
able dynamic detail synthesis.
|