title | abstract | introduction |
---|---|---|
Yao_Teacher-Generated_Spatial-Attention_Labels_Boost_Robustness_and_Accuracy_of_Contrastive_Models_CVPR_2023 | Abstract
Human spatial attention conveys information about the
regions of visual scenes that are important for perform-
ing visual tasks. Prior work has shown that the informa-
tion about human attention can be leveraged to benefit var-
ious supervised vision tasks. Might providing this weak
form of supervision be useful for self-supervised represen-
tation learning? Addressing this question requires collect-
ing large datasets with human attention labels. Yet, col-
lecting such large scale data is very expensive. To address
this challenge, we construct an auxiliary teacher model to
predict human attention, trained on a relatively small la-
beled dataset. This teacher model allows us to generate im-
age (pseudo) attention labels for ImageNet. We then train
a model with a primary contrastive objective; to this stan-
dard configuration, we add a simple output head trained
to predict the attention map for each image, guided by the
pseudo labels from the teacher model. We measure the qual-
ity of learned representations by evaluating classification
performance from the frozen learned embeddings as well as
performance on image retrieval tasks (see supplementary
material). We find that the spatial-attention maps predicted
from the contrastive model trained with teacher guidance
align better with human attention than vanilla con-
trastive models. Moreover, we find that our approach im-
proves classification accuracy and robustness of the con-
trastive models on ImageNet and ImageNet-C. Further, we
find that model representations become more useful for im-
age retrieval tasks, as measured by precision-recall perfor-
mance on ImageNet, ImageNet-C, CIFAR10, and CIFAR10-
C datasets.
Figure 1. Illustration. A teacher model is trained to predict human
spatial-attention from a small dataset. Then the model is used to
provide attention labels for a larger dataset, which are used as addi-
tional targets for contrastive models.
| 1. Introduction
Deep learning models have made significant progress
and obtained notable success on various vision tasks. De-
spite these promising results, humans continue to perform
better than deep learning models in many applications. A
notable reason is that deep learning models have a tendency
to learn “short-cuts”, i.e., giving significance to physically
meaningless patterns or exploiting features which are pre-
dictive in some settings, but not causal [ 20]. Examples
include focusing on less significant features such as back-
ground and textures [ 13]. These models yield representa-
tions that are less generalizable and lead to models that are
highly sensitive to small pixel modulations [ 42].
Human vision on the other hand is known to be much
more robust and generalizable. One major difference be-
tween human and machine vision is that humans tend to
*Equal technical contribution.
†Equal leadership and advising contribution.
Correspondence to:
[email protected] & [email protected]
focus on specific regions in a visual scene [45]. These locations often reflect regions that are salient or useful for performing a specific vision task. Machines, instead, initially place equal significance on all regions. A natural question is: would it be beneficial if machine vision models were guided by human spatial attention?
Human spatial attention has been shown to benefit com-
puter vision models in supervised tasks, such as classifica-
tion [32]. Yet, it remains an open question whether weak supervision in the form of human spatial attention could similarly benefit self-supervised models that are
trained end-to-end. Self-supervised models typically need
a large amount of data to yield good representations. To test whether training weakly supervised models with human spatial attention cues is beneficial, we would need to collect a large volume of human spatial attention labels, which is a very expen-
sive process that requires either using trackers to record eye
movements [ 5,43,52] or asking humans to highlight regions
that they attend to [ 25,27]. This process is prohibitively te-
dious and costly for datasets with millions of examples.
In this work, we test the hypothesis that weak super-
vision in the form of human spatial attention is beneficial
for representation learning for models trained with a con-
trastive objective. Inspired by knowledge distillation and
self-training using teacher models [ 47,49], we address the
challenge of obtaining spatial attention labels on large scale
image datasets by using machine pseudo-labeling. We train
a teacher model on a set of limited ground truth human spa-
tial attention labels, and use this teacher model to gener-
ate spatial attention pseudo-labels for the large ImageNet
benchmark. We are then able to utilize the generated spa-
tial attention maps in the contrastive models, and discover
that this approach yields representations that are highly pre-
dictive of human spatial attention. Further, we find that the
learned representations are better as measured by higher ac-
curacy and robustness on classification downstream tasks,
and higher precision and recall on image retrieval tasks. In-
terestingly, we find that the gains from using teacher models to provide pseudo labels are larger than those from using the limited ground truth human labels directly when training contrastive models, and the gains are larger for contrastive models than when applying the same method to supervised models.
In summary, our contributions are as follows:
• We create a dataset with spatial attention maps for the ImageNet [37] benchmark by first training a teacher model to predict human spatial attention labels from the SALICON dataset [25] and then using the model to label ImageNet examples.
• We use spatial-attention labels from the teacher model as an additional prediction target for models trained with a contrastive objective; a minimal sketch follows this list. (The trained teacher model is available at: https://github.com/google-research/google-research/tree/master/human attention/)
• We find that the proposed method can learn better representations, leading to better accuracy and ro-
bustness for downstream classification tasks (on Im-
ageNet and ImageNet-C), and better performance on
retrieval tasks (on ImageNet, ImageNet-C, CIFAR-10,
and CIFAR10-C).
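Following up on the second contribution, the snippet below is a minimal sketch (not the authors' code) of how an attention-prediction head can be trained alongside a contrastive objective. The module names (`encoder`, `proj_head`, `attention_head`, `teacher`) and the loss weight `lambda_attn` are assumed, and the contrastive term is a simplified SimCLR-style loss.

```python
# Minimal sketch: contrastive objective + auxiliary attention head supervised
# by teacher pseudo-labels. All module names and the weighting are assumptions.
import torch
import torch.nn.functional as F

def training_step(encoder, proj_head, attention_head, teacher,
                  x1, x2, temperature=0.1, lambda_attn=1.0):
    # Contrastive branch: two augmented views -> normalized embeddings.
    f1, f2 = encoder(x1), encoder(x2)                      # spatial feature maps (B, C, H, W)
    z1 = F.normalize(proj_head(f1.mean(dim=(2, 3))), dim=1)
    z2 = F.normalize(proj_head(f2.mean(dim=(2, 3))), dim=1)
    logits = z1 @ z2.t() / temperature                     # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    loss_contrastive = F.cross_entropy(logits, targets)

    # Attention branch: regress the teacher's pseudo attention map for view 1.
    with torch.no_grad():
        attn_pseudo = teacher(x1)                          # (B, 1, H, W) pseudo label
    attn_pred = attention_head(f1)                         # assumed to match the pseudo-label shape
    loss_attention = F.mse_loss(attn_pred, attn_pseudo)

    return loss_contrastive + lambda_attn * loss_attention
```

The attention head adds only a small output branch, so the extra supervision comes at little architectural cost.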
|
Yang_FreeNeRF_Improving_Few-Shot_Neural_Rendering_With_Free_Frequency_Regularization_CVPR_2023 | Abstract
Novel view synthesis with sparse inputs is a challeng-
ing problem for neural radiance fields (NeRF). Recent ef-
forts alleviate this challenge by introducing external super-
vision, such as pre-trained models and extra depth signals,
or by using non-trivial patch-based rendering. In this pa-
per, we present Frequency regularized NeRF (FreeNeRF),
a surprisingly simple baseline that outperforms previous
methods with minimal modifications to plain NeRF . We an-
alyze the key challenges in few-shot neural rendering and
find that frequency plays an important role in NeRF’s train-
ing. Based on this analysis, we propose two regularization
terms: one to regularize the frequency range of NeRF’s
inputs, and the other to penalize the near-camera density
fields. Both techniques are “free lunches” that come at no additional computational cost. We demonstrate that even
with just one line of code change, the original NeRF can
achieve similar performance to other complicated methods
in the few-shot setting. FreeNeRF achieves state-of-the-
art performance across diverse datasets, including Blender,
DTU, and LLFF . We hope that this simple baseline will mo-
tivate a rethinking of the fundamental role of frequency in
NeRF’s training, under both the low-data regime and be-
yond. This project is released at FreeNeRF .
| 1. Introduction
Neural Radiance Field (NeRF) [ 21] has gained tremen-
dous attention in 3D computer vision and computer graph-
ics due to its ability to render high-fidelity novel views.
However, NeRF is prone to overfitting to training views and
struggles with novel view synthesis when only a few inputs
are available. We term this problem of view synthesis from sparse inputs the few-shot neural rendering problem.
Existing methods address this challenge using different
strategies. Transfer learning methods, e.g., PixelNerf [ 37]
and MVSNeRF [ 4], pre-train on large-scale curated multi-
view datasets and further incorporate per-scene optimiza-
tion at test time. Depth-supervised methods [ 6,29] in-
troduce estimated depth as an external supervisory signal,
leading to a complex training pipeline. Patch-based reg-
ularization methods impose regularization from different
sources on rendered patches, e.g., semantic consistency reg-
ularization [ 11], geometry regularization [ 8,22], and ap-
pearance regularization [ 22], all at the cost of computation
overhead since an additional, non-trivial number of patches
must be rendered during training [ 8,11,22].
In this work, we find that a plain NeRF can work sur-
prisingly well with none of the above strategies in the few-
shot setting by adding (approximately) as few as one line
of code (see Fig. 1). Concretely, we analyze the common
failure modes in training NeRF under a low-data regime.
Drawing on this analysis, we propose two regularization
terms. One is frequency regularization, which directly reg-
ularizes the visible frequency bands of NeRF’s inputs to
stabilize the learning process and avoid catastrophic over-
fitting at the start of training. The other is occlusion reg-
ularization, which penalizes the near-camera density fields
that cause “floaters,” another failure mode in the few-shot
neural rendering problem. Combined, we call our method
Frequency regularized NeRF (FreeNeRF), which is “free”
in two ways. First, it is dependency-free because it requires
neither costly pre-training [ 4,11,22,37] nor extra supervi-
sory signals [ 6,29]. Second, it is overhead-free as it requires
no additional training-time rendering for patch-based regu-
larization [ 8,11,22].
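The frequency regularization described above can be pictured as a mask over the positional-encoding bands that opens up gradually during training. The sketch below is only an illustration under assumed names (`num_freqs`, `total_reg_steps`) and a linear schedule; it is not the official implementation.

```python
# Illustrative frequency regularization: linearly grow the number of visible
# positional-encoding frequency bands with training progress.
import torch

def freq_mask(num_freqs: int, step: int, total_reg_steps: int) -> torch.Tensor:
    """Per-frequency mask in [0, 1]; low frequencies are opened first."""
    visible = num_freqs * min(step / max(total_reg_steps, 1), 1.0)
    return torch.clamp(visible - torch.arange(num_freqs, dtype=torch.float32), 0.0, 1.0)

def masked_positional_encoding(x, num_freqs, step, total_reg_steps):
    # x: (..., 3) coordinates; standard sin/cos encoding, masked per frequency band.
    freqs = 2.0 ** torch.arange(num_freqs, dtype=torch.float32)          # (L,)
    angles = x[..., None, :] * freqs[:, None]                            # (..., L, 3)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)      # (..., L, 6)
    enc = enc * freq_mask(num_freqs, step, total_reg_steps)[:, None]     # zero out high bands early
    return enc.flatten(-2)                                               # (..., 6L)
```

Early in training only the lowest bands pass through, which matches the observation that catastrophic overfitting stems from high-frequency inputs at the start of optimization.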
We consider FreeNeRF a simple baseline (with mini-
mal modifications to a plain NeRF) in the few-shot neural
rendering problem, although it already outperforms exist-
ing state-of-the-art methods on multiple datasets, including
Blender, DTU, and LLFF, at almost no additional computa-
tion cost. Our contributions can be summarized as follows:
•We reveal the link between the failure of few-shot neu-
ral rendering and the frequency of positional encoding,
which is further verified by an empirical study and ad-
dressed by our proposed method. To our knowledge, our
method is the first attempt to address few-shot neural ren-
dering from a frequency perspective.
•We identify another common failure pattern in learning
NeRF from sparse inputs and alleviate it with a new oc-
clusion regularizer. This regularizer effectively improves
performance and generalizes across datasets.
•Combined, we introduce a simple baseline, FreeNeRF, that can be implemented with a few lines of code mod-
ification while outperforming previous state-of-the-art
methods. Our method is dependency-free and overhead-
free, making it a practical and efficient solution to this
problem.
We hope the observations and discussions in this paper
will motivate people to rethink the fundamental role of fre-
quency in NeRF’s positional encoding.
|
Yoshimura_Rawgment_Noise-Accounted_RAW_Augmentation_Enables_Recognition_in_a_Wide_Variety_CVPR_2023 | Abstract
Image recognition models that work in challenging en-
vironments (e.g., extremely dark, blurry, or high dynamic
range conditions) are highly desirable. However, creating train-
ing datasets for such environments is expensive and hard
due to the difficulties of data collection and annotation. It
would be desirable if we could obtain a robust model without the need
for hard-to-obtain datasets. One simple approach is to ap-
ply data augmentation such as color jitter and blur to stan-
dard RGB (sRGB) images in simple scenes. Unfortunately,
this approach struggles to yield realistic images in terms of
pixel intensity and noise distribution due to not consider-
ing the non-linearity of Image Signal Processors (ISPs) and
noise characteristics of image sensors. Instead, we propose
a noise-accounted RAW image augmentation method. In
essence, color jitter and blur augmentation are applied to a
RAW image before applying non-linear ISP , resulting in re-
alistic intensity. Furthermore, we introduce a noise amount
alignment method that calibrates the domain gap in the
noise property caused by the augmentation. We show that
our proposed noise-accounted RAW augmentation method
doubles the image recognition accuracy in challenging en-
vironments only with simple training data.
| 1. Introduction
Although image recognition has been actively studied,
its performance in challenging environments still needs im-
provement [15]. Sensitive applications such as mobility
sensing and head-mounted wearables need to be robust to
various kinds of difficulties, including low light, high dy-
namic range (HDR) illuminance, motion blur, and cam-
era shake. One possible solution is to use image enhance-
ment and restoration methods. A lot of DNN-based low-
light image enhancement [12, 20, 29, 30, 46, 54], denois-
ing [32, 43, 53], and deblurring [43, 48, 52] methods are
proposed to improve the pre-captured sRGB image qual-
ity. While they are useful for improving pre-captured image
[Figure 1 diagram: (a) the usual training pipeline, where (contrast) augmentation applied after the ISP yields unrealistic intensity and noise; (b) the noise-accounted RAW augmentation pipeline, where augmentation and noise correction are applied to the RAW image (luminance + noise) before the ISP, yielding realistic intensity and noise.]
Figure 1. The concept of the proposed noise-accounted RAW aug-
mentation. Conventional augmentation (a) is applied to the output
of an ISP; due to the nonlinear operations in the ISP, it produces
images that cannot be captured at any ambient light intensities.
Instead, ours (b) applies augmentation before an ISP. It generates
realistic pixel intensity distribution that can be captured when the
light intensity is different. Moreover, the noise amount is also cor-
rected to minimize the domain gap between real and augmented
ones.
quality, a recent work [15] shows that using them as prepro-
cessing for image recognition models has limited accuracy
gains since they already lost some information, and restor-
ing the lost information is difficult.
Another possible solution is to prepare a dataset for dif-
ficult environments [3, 33]. However, these datasets only
cover one or a few difficulties, and creating datasets in
various environments is too expensive. Especially, man-
ual annotation of challenging scenes is difficult and time-
consuming. For example, we can see almost nothing in
usual sRGB images under extremely low-light environ-
ments due to heavy noise. In addition, some regions in HDR
scenes suffer from halation or blocked-up shadows because
the 8-bit range of usual sRGB images cannot fully preserve
the real world, which is 0.000001 [cd/m2]under starlight
and 1.6 billion [cd/m2]under direct sunlight [37]. Heavy
motion blur and camera shake also make annotation diffi-
cult. Some works capture paired short-exposure and long-
exposure images, and the clean long-exposure images are
used for annotation or ground truth [15–17, 19]. The limi-
tation is that the target scene needs to be motionless if the
pairs are taken sequentially with one camera [15], and posi-
tional calibration is required if the pairs are taken with syn-
chronized cameras [16,17]. Some works use a beam splitter
to capture challenging images and their references without
calibration [19, 45]. However, they are difficult to apply in
dark scenes. Moreover, HDR images cannot be taken in
the same way because some regions become overexposed
or underexposed in both cameras.
To this end, we aim to train image recognition mod-
els that work in various environments only using a train-
ing dataset in simple environments like bright, low dynamic
range, and blurless. In this case, image augmentation or do-
main adaptation is important to overcome the domain gap
between easy training data and difficult test data. However,
we believe usual augmentations on sRGB space are inef-
fective because they do not take into account the nonlinear
mapping of ISPs. In particular, tone mapping significantly
changes the RAW image values, which are roughly propor-
tional to physical brightness [44]. Contrast, brightness, and
hue augmentation on sRGB space result in unrealistic im-
ages that cannot be captured under any ambient light inten-
sity as shown in Fig. 1(a). In contrast, we propose augmen-
tation on RAW images. In other words, augmentation is
applied before ISPs to diminish the domain shift as shown
in Fig. 1(b).
Other possible sources of the domain gap are differences
in noise amount and noise distribution. To tackle these
problems, we propose a method to align both light inten-
sity and noise domains. Recent works show that adding
physics-based realistic noise improves the performance of
DNN-based denoisers [2, 44, 47, 50] and dark image recog-
nition [4, 15]. Although their proposed sensor noise mod-
elings are accurate, they assume that the original bright im-
ages are noise free. In contrast, we propose to modify the
noise amount after contrast, brightness, and hue conversion
considering the noise amount in the original images. It en-
ables a more accurate alignment of the noise domain. Even
bright images may have dark areas due to shadows or object
colors, and their prior noise cannot be ignored. Another
merit of our method is that it is possible to take dark im-
ages that already contain a lot of noise as input. In addition
to noise alignment after color jitter augmentation, we show
the importance of noise alignment after blur augmentation,
which is proposed for the first time in this paper.
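To make the noise-alignment idea concrete, the sketch below applies a brightness gain to a RAW image under an assumed heteroscedastic noise model var(x) = a·x + b (shot plus read noise with calibrated constants `a` and `b`) and adds only the extra noise needed to match what a real capture at the new brightness would contain, accounting for the noise already present in the input. This is a simplified illustration rather than the paper's exact procedure; it ignores blur and hue augmentation and simply clips when the target variance is below the existing one.

```python
# Sketch of noise-accounted brightness augmentation on RAW data.
# Assumptions: heteroscedastic noise var(x) = a*x + b with calibrated a, b.
import numpy as np

def gain_augment_raw(raw: np.ndarray, gain: float, a: float, b: float, rng=None) -> np.ndarray:
    rng = np.random.default_rng() if rng is None else rng
    scaled = raw * gain                                   # brightness change applied on RAW

    var_existing = (gain ** 2) * (a * raw + b)            # input noise, amplified by the gain
    var_target = a * scaled + b                           # noise a real capture at this brightness would have

    var_extra = np.clip(var_target - var_existing, 0.0, None)   # add only the missing noise
    return scaled + rng.normal(scale=np.sqrt(var_extra))
```

Because the existing noise is accounted for before sampling new noise, even inputs that are already noisy (e.g., shadowed regions of bright scenes) end up with a noise level consistent with the simulated exposure.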
Our contributions are as follows:
• To the best of our knowledge, this is the first work to emphasize the importance of augmentation before the ISP for image recognition.
• A noise amount alignment method is proposed to reduce
the noise domain gap after RAW image augmentation.
In contrast to previous works, our proposed method
takes into account prior noise in the input image. It en-
ables more accurate alignment and use of any strength
of augmentation and even already noisy input.
• We show a qualitative analysis of the validity of our sensor noise modeling and the corresponding noise-accounted augmentation, and demonstrate that our proposed noise-accounted RAW augmentation has the edge over previous methods.
|
You_Castling-ViT_Compressing_Self-Attention_via_Switching_Towards_Linear-Angular_Attention_at_Vision_CVPR_2023 | Abstract
Vision Transformers (ViTs) have shown impressive per-
formance but still require a high computation cost as com-
pared to convolutional neural networks (CNNs); one rea-
son is that ViTs’ attention measures global similarities and
thus has a quadratic complexity with the number of in-
put tokens. Existing efficient ViTs adopt local attention or
linear attention, which sacrifice ViTs’ capabilities of cap-
turing either global or local context. In this work, we
ask an important research question: Can ViTs learn both
global and local context while being more efficient during
inference? To this end, we propose a framework called
Castling-ViT , which trains ViTs using both linear-angular
attention and masked softmax-based quadratic attention,
but then switches to having only linear-angular attention
during inference. Our Castling-ViT leverages angular ker-
nels to measure the similarities between queries and keys
via spectral angles. And we further simplify it with two tech-
niques: (1) a novel linear-angular attention mechanism:
we decompose the angular kernels into linear terms and
high-order residuals, and only keep the linear terms; and
(2) we adopt two parameterized modules to approximate
high-order residuals: a depthwise convolution and an aux-
iliary masked softmax attention to help learn global and lo-
cal information, where the masks for softmax attention are
regularized to gradually become zeros and thus incur no
overhead during inference. Extensive experiments validate
the effectiveness of our Castling-ViT, e.g., achieving up to a
1.8% higher accuracy or 40% MACs reduction on classifi-
cation and 1.2 higher mAP on detection under comparable
FLOPs, as compared to ViTs with vanilla softmax-based at-
tentions. The project page is available here.
| 1. Introduction
Vision Transformers (ViTs) have made significant
progress in image classification, object detection, and many
*Equal contribution. †Work done while interning at Meta Research.
[Figure 1 plots: Top-1 accuracy (%) vs. MACs (G) for image classification on ImageNet, comparing Castling-ViT (Ours) with LeViT, EfficientNet, MobileNetV2, DeiT, RegNetY, Swin, CSwin, MViTv2, and Autoformer; and mAP (%) vs. MACs (G) for object detection on COCO, comparing with YOLOv5, YOLOX, MobileDet-DSP, EfficientDet, and FBNetv5.]
Figure 1. Castling-ViT over SOTA baselines on (1) ImageNet [18]
image classification and (2) COCO [36] object detection.
other applications. It is well recognized that the supe-
rior performance achieved by ViTs is largely attributed to
their self-attention modules that can better capture global
context [20, 57, 64]. Nevertheless, ViTs’ powerful self-
attention module comes at the cost of quadratic complex-
ity with the number of input tokens, causing a major effi-
ciency bottleneck to ViTs’ achievable runtime (i.e., infer-
ence latency) [3, 8, 32, 58, 66, 69, 76]. To mitigate this is-
sue, linear attention designs have been developed to alle-
viate the vanilla ViT attention’s quadratic complexity. In
particular, existing efforts can be categorized into two clus-
ters: (1) ViTs with local attention by restricting the atten-
tion window size [38, 53], sharing the attention queries [2],
or representing the attention queries/keys with low rank ma-
trices [58]; and (2) ViTs with kernel-based linear attention,
which approximate the non-linearity softmax function by
decomposing it into separate kernel embeddings. This en-
ables a change in the matrix computation order for a re-
duced computational complexity [5,6,14,29,37,39,43,66].
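The complexity reduction from reordering the computation can be sketched as below. This is a generic kernel-based linear attention with an assumed ELU+1 feature map, not Castling-ViT's linear-angular attention: with a feature map phi, computing phi(Q)(phi(K)^T V) instead of (phi(Q)phi(K)^T)V drops the cost from quadratic to linear in the number of tokens.

```python
# Generic kernel linear attention (illustration only): the associativity of
# matrix products lets us form phi(K)^T V first, an O(N d^2) computation,
# instead of the O(N^2 d) attention matrix phi(Q) phi(K)^T.
import torch
import torch.nn.functional as F

def kernel_linear_attention(q, k, v, eps: float = 1e-6):
    # q, k, v: (batch, tokens, dim)
    phi_q = F.elu(q) + 1.0                         # simple non-negative feature map (an assumption)
    phi_k = F.elu(k) + 1.0
    kv = torch.einsum("bnd,bne->bde", phi_k, v)    # (batch, dim, dim)
    z = torch.einsum("bnd,bd->bn", phi_q, phi_k.sum(dim=1)) + eps   # per-token normalizer
    return torch.einsum("bnd,bde->bne", phi_q, kv) / z[..., None]
```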
Despite their promise in alleviating ViTs’ complexity
and thus inference runtime, both the local and linear at-
tention compromise ViTs’ performance due to the lack of
capabilities to capture global or local context. To marry the
best of both worlds, we advocate training ViTs with both (1)
efficient but less powerful linear attention, i.e., without the
high-order residuals in angular kernel expansion, and (2)
powerful yet costly softmax-based masked attention. The
latter helps approximate high-order residuals at the early
training stage while being dropped during inference, based
on an assumption that the remaining networks can gradu-
ally learn the high-order components at the later training
stage [67]. This concept resembles the “castling” move in
chess when two pieces are moved at once. While it sounds
promising, there are still two challenges to achieve this.
First , existing linear attention modules still underperform
their vanilla softmax-based counterparts. Therefore, a bet-
ter linear attention is crucial for the final performance. We
find that angular kernels perform as well as softmax-based attention in terms of similarity measurement. While they still suffer from a quadratic complexity, they can be divided into linear terms and high-order residuals. The challenge is how to construct ViTs with only the linear terms. Second,
doing so would require that the trained ViTs merely rely on
the linear terms towards the end of training, which would
call for an approximation of the above high-order residuals.
The challenge is how we can resort to costly but powerful modules to approximate high-order residuals during training without incurring extra inference cost.
In this work, we develop techniques to tackle those chal-
lenges, and make the following contributions:
• We propose a framework called Castling-ViT , which
trains ViTs using both linear-angular attention and
masked softmax-based quadratic attention, but then
switches to having only linear-angular attentions dur-
ing ViT inference to save computational costs.
• We develop a new linear-angular attention leveraging
angular kernels to close the accuracy gap between lin-
ear attention and softmax-based attention. It expands
angular kernels where linear terms are kept while com-
plex high-order residuals are approximated.
• We use two parameterized modules to approximate
the high-order residuals above: a depthwise convolu-
tion and an auxiliary masked softmax-based attention,
where the latter’s attention masks are regularized to
gradually become zeros to avoid inference overhead.
• We conduct extensive experiments to validate the ef-
fectiveness of the proposed Castling-ViT. Results on
classification, detection, and segmentation tasks con-
sistently demonstrate its superior performance ( ↑1.8%
top-1 accuracy or ↑1.2 mAP) or efficiency (40% MACs
savings) over state-of-the-art (SOTA) CNNs and ViTs. |
Zeng_ConZIC_Controllable_Zero-Shot_Image_Captioning_by_Sampling-Based_Polishing_CVPR_2023 | Abstract
Zero-shot capability has been considered as a new rev-
olution of deep learning, letting machines work on tasks
without curated training data. As a good start and the
only existing outcome of zero-shot image captioning (IC),
ZeroCap abandons supervised training and sequentially
searches every word in the caption using the knowledge
of large-scale pre-trained models. Though effective, its
autoregressive generation and gradient-directed searching
mechanism limit the diversity of captions and inference
speed, respectively. Moreover, ZeroCap does not consider
the controllability issue of zero-shot IC. To move forward,
we propose a framework for Controllable Zero-shot IC,
named ConZIC . The core of ConZIC is a novel sampling-
based non-autoregressive language model named Gibbs-
BERT, which can generate and continuously polish every
word. Extensive quantitative and qualitative results demon-
strate the superior performance of our proposed ConZIC
for both zero-shot IC and controllable zero-shot IC. Espe-
cially, ConZIC achieves about 5× faster generation speed than ZeroCap, and about 1.5× higher diversity scores, with
accurate generation given different control signals. Our
code is available at https://github.com/joeyz0z/ConZIC.
| 1. Introduction
Image captioning (IC) is a visual-language task, which
targets at automatically describing an image by generating
a coherent sentence. By performing supervised learning on
human-annotated datasets, such as MS-COCO [43], many
methods [22, 33, 49, 50] have achieved impressive evalu-
ation scores on metrics like BLEU [52], METEOR [7],
CIDERr [66], and SPICE [3]. However, these methods still
lag behind human capability of zero-shot IC.
Specifically, those supervised methods extremely rely on
well-designed image-captions pairs. However, it is likely
*Equal contribution. †Corresponding authors
[Figure 1 content: (a) example zero-shot captions from GRIT, ViTCap, CLIPCap, ZeroCap, and ConZIC (Ours) on two images; (b) diverse positive- and negative-sentiment captions generated by ConZIC for the same image, with varied words and sentence patterns.]
Figure 1. The highlights of our proposed method. (a) shows two
examples of zero-shot image captioning on several SOTA meth-
ods. Specifically, GRIT [50] and ViTCAP [22] are two supervised
methods without pre-trained models. ClipCap [49] is a super-
vised method using pre-trained CLIP. GRIT, ViTCAP, and CLIP-
Cap are first trained on MSCOCO and then tested. ZeroCap
[65] is the zero-shot method without any training. (b) shows the
diversity of our proposed ConZIC, which manifests two aspects:
semantic (diverse words: different colors denoting different parts-
of-speech) and syntactic (diverse sentence patterns).
impossible to construct a large enough dataset, including
paired images and high-quality captions covering various
styles/contents. As a result, it is challenging for the machine
to caption images that are outliers with respect to the train-
ing distribution, which is common in real applications (see
examples in Fig. 1a). On the contrary, humans can perform
IC without any specific training, i.e., realizing zero-shot IC.
This is because humans can integrate what they see, i.e., the image, with what they know, i.e., their knowledge.
Recently, large-scale pretraining models have shown a
strong capability of learning knowledge from super-large-
scale data, showing great potential in various downstream
tasks [10,27,54,57,63]. Equipped with the visual-language
knowledge learned by CLIP [57] and linguistic knowledge
from GPT-2 [58], ZeroCap [65] is the first and the only zero-
shot IC method, which proposes a searching-based strategy
and is free of training on extra supervised data. Specifically,
ZeroCap searches the caption words one by one and from
left to right, guided by CLIP-induced score for image-text
matching and GPT-2 word distribution for caption fluency.
ZeroCap is a good start and inspires us to explore how to
search for the optimal caption in a better way.
i) More flexible. ZeroCap utilizes GPT-2 to perform left-
to-right autoregressive generation. Once a word is fixed,
there is no chance to modify it when we move to the next
position. In other words, such a generation order is not flexi-
ble enough to consider the full context information.
ii) More efficient. The searching at every position is real-
ized by iteratively updating the parameters of GPT-2, which
is time-consuming, as shown in Fig. 3c.
iii) More diverse. IC is an open problem. Given an
image, different people may have different visual attention [14] and language describing styles [24, 47, 73], thus
resulting in diverse descriptions. ZeroCap employs beam
search to generate several candidate sentences, which, how-
ever, have similar syntactic patterns (see Appendix D).
iv) More controllable. To endow captioning models with
human-like controllability, e.g., sentiment, personality, a re-
cent surge of efforts [12,19,24,47] resort to introducing ex-
tra control signals as constraints of the generated captions,
called Controllable IC. However, controllable zero-shot IC
has not been explored yet.
Bearing all these four-aspect concerns in mind, we
propose a novel framework for controllable zero-shot IC,
named ConZIC, as shown in Fig. 2. Specifically, after ana-
lyzing the relationship between Gibbs sampling and masked
language models (MLMs, currently we use BERT) [11, 20,
70], we first develop a new language model (LM) called Gibbs-BERT to realize zero-shot IC by sampling-based search. Compared with autoregressive models, Gibbs-BERT has a more flexible generation order, bringing self-correction capability through bidirectional attention together with faster and more diverse generation. After integrating Gibbs-BERT with CLIP, which is used to evaluate the similarity between image and text, our proposed framework can perform zero-shot IC. By further introducing a task-specific discriminator for the control signal into our framework, it can also perform controllable zero-shot IC.
The main contributions of this paper are:
• We propose to solve the controllable zero-shot IC task in
a polishing way. By combining Gibbs sampling with an MLM, we can randomly initialize the caption and then polish every word based on the full context (bidirectional information) in the caption (see the sketch after this list).
• ConZIC is free of parameter updates, achieving about 5×
faster generation speed than the SOTA method, ZeroCap.
• Equipped with Gibbs-BERT, ConZIC can perform flexi-
ble searching, thus generating sentences with higher di-
versity, as shown in Table 1.
• To the best of our knowledge, ConZIC is the first control-
lable zero-shot IC method. Four classes of controllable
signals, including length, infilling, styles, and parts-of-
speech, are evaluated in our experiments.
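The sampling-based polishing referenced in the first contribution can be outlined as follows. The helpers `mlm_topk`, `clip_score`, and `control_score` are placeholders for a masked language model, a CLIP image-text matcher, and an optional task-specific discriminator; the weights are assumed, and greedy selection is shown for clarity where the actual method samples from the combined distribution.

```python
# High-level outline of sampling-based polishing (placeholder helpers, not
# real library calls). Each sweep revisits every position in a random order
# and re-scores candidate words using the full bidirectional context.
import random

def polish_caption(image, length, num_sweeps, mlm_topk, clip_score,
                   control_score=None, alpha=1.0, beta=1.0, gamma=1.0,
                   mask_token="[MASK]"):
    caption = [mask_token] * length                       # blank/random initialization
    for _ in range(num_sweeps):
        for pos in random.sample(range(length), length):  # flexible visiting order
            candidates = mlm_topk(caption, pos)           # [(word, log_prob), ...] from the MLM
            best_word, best_score = caption[pos], float("-inf")
            for word, lm_logp in candidates:
                trial = caption[:pos] + [word] + caption[pos + 1:]
                score = alpha * lm_logp + beta * clip_score(image, trial)
                if control_score is not None:             # e.g. sentiment, length, parts-of-speech
                    score += gamma * control_score(trial)
                if score > best_score:
                    best_word, best_score = word, score
            caption[pos] = best_word                      # polish this position, keep the rest
    return " ".join(caption)
```

Because no gradient step on the language model is involved, each position costs only a handful of forward evaluations, which is in line with the reported speed advantage over gradient-directed searching.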
|
Zhang_Revisiting_Rotation_Averaging_Uncertainties_and_Robust_Losses_CVPR_2023 | Abstract
In this paper, we revisit the rotation averaging problem
applied in global Structure-from-Motion pipelines. We ar-
gue that the main problem of current methods is the mini-
mized cost function that is only weakly connected with the
input data via the estimated epipolar geometries. We pro-
pose to better model the underlying noise distributions by
directly propagating the uncertainty from the point corre-
spondences into the rotation averaging. Such uncertain-
ties are obtained for free by considering the Jacobians
of two-view refinements. Moreover, we explore integrat-
ing a variant of the MAGSAC loss into the rotation av-
eraging problem, instead of using classical robust losses
employed in current frameworks. The proposed method
leads to results superior to baselines, in terms of accu-
racy, on large-scale public benchmarks. The code is publicly available at https://github.com/zhangganlin/GlobalSfMpy.
| 1. Introduction
Building large 3D reconstructions from unordered im-
age collections is an essential component in any system that
relies on crowd-sourced mapping. The current paradigm
is to perform this reconstruction via Structure-from-Motion
[29] which jointly estimates the camera parameters and the
scene geometry represented with a 3D point cloud. Methods for Structure-from-Motion can generally be categorized into two classes: Incremental methods [29, 32, 33, 38] that
sequentially grows a seed reconstruction by alternating tri-
angulation and registering new images, and Global meth-
ods[8, 22, 24, 27] which first estimate pairwise geometries
and then aggregate them in a bottom-up approach. Histor-
ically, incremental methods are more robust and accurate,
but the need for frequent bundle adjustment [35] comes with
significant computational cost which limits their scalabil-
ity. In contrast, global (or non-sequential) methods require
much lower computational effort and can in principle scale
to larger image collections. However, in practice, current
methods are held back by the lack of accuracy and have not
enjoyed the same level of success as incremental methods.
Global methods work by first estimating a set of pair-
wise epipolar geometries between co-visible images. Next,
viarotation averaging , a set of globally consistent rotations
are estimated by ensuring they agree with the pairwise rel-
ative rotations. Once the rotations are known, the camera
positions and 3D structure are estimated, and refined jointly
in a single final bundle adjustment.
Rotation averaging has a long history in computer vision
(seee.g. [16,22] for early works) and is a well-studied prob-
lem. Most methods formulate it as an optimization problem,
finding the rotation assignment that minimizes some energy.
A common choice is the chordal distance, measuring the
discrepancy in the rotation matrices in the $L_2$ sense

$$\min_{\{R_i\}_{i=1}^{N}} \; \sum_{i=1}^{N} \sum_{j \in \mathcal{N}(i)} \big\| \hat{R}_{ij} R_i - R_j \big\|_F^2 \qquad (1)$$

where $\hat{R}_{ij}$ is the relative rotation estimated between images
$i$ and $j$. There are also other choices such as using angle-
axis [34] or quaternion [16] as rotation representation, or
optimising over a Lie algebra [17]; however, the overall idea
(measuring some consistency with the relative estimates)
remains the same. Many works have focused on the opti-
mization problem itself, both theoretically [11, 36] and by
providing new algorithms [9], but did not consider whether
the cost itself is suitable for the task. In (1), each relative ro-
tation measurement is given the same weight. However in
practice, the quality of the epipolar geometries varies sig-
nificantly. Figure 1 shows two images with wildly different
uncertainties (and errors) in the rotation estimate. To ad-
dress this problem, there is a line of work [6, 18, 31] which
augments the cost in (1) with robust loss functions that give
a lower weight to large residuals. However, the same loss
function is generally applied to each residual, independent
of the measurement uncertainty.
In this paper we revisit the rotation averaging problem.
We argue that the main problem in current methods is that
the cost functions that are minimized are only weakly con-
nected with the input data via the estimated epipolar ge-
ometries. We propose to better model the underlying noise
distributions (coming from the keypoint detection noise and
spatial distribution) by directly propagating the uncertainty
from the point correspondences into the rotation averaging
problem, as shown in Figure 2. While the idea itself is sim-
ple, we show that this allows us to get significantly more
accurate estimates of the absolute rotations; reducing the
gap between incremental and global methods. Note that the
uncertainties we leverage are essentially obtained for free
by considering the Jacobians of the two-view refinement.
As a second contribution, we explore integrating a vari-
ant of the MAGSAC [3] loss into the rotation averaging
problem, instead of using the classical robust losses em-
ployed in current frameworks. MAGSAC [3] was originally
proposed as a threshold-free estimator for two-view epipo-
lar geometry, where the idea is to marginalize over an in-
terval of acceptable thresholds, i.e., noise range. We show
that this fits well into the context of rotation-averaging, as it
is not obvious how to set the threshold for deciding on in-
lier/outlier relative rotation measurements, especially in the
uncertainty-reweighted cost that we propose.
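As a rough sketch of this direction, the snippet below evaluates an uncertainty-weighted, robustified variant of the cost in Eq. (1): each relative-rotation residual is whitened by a 3×3 covariance propagated from the two-view refinement and passed through a robust loss. A Cauchy loss stands in for the MAGSAC-style marginalized loss, and the axis-angle residual parameterization is one reasonable choice rather than necessarily the paper's.

```python
# Sketch of an uncertainty-weighted, robust rotation-averaging cost.
# `rel_measurements` holds (i, j, R_ij_hat, cov_ij) tuples; cov_ij is the 3x3
# covariance propagated from the two-view estimate (an assumed interface).
import numpy as np

def log_so3(R: np.ndarray) -> np.ndarray:
    """Rotation matrix -> axis-angle vector (good enough for a sketch)."""
    cos_theta = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)
    if theta < 1e-8:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def weighted_robust_cost(R_abs, rel_measurements, cauchy_c: float = 1.0) -> float:
    total = 0.0
    for i, j, R_ij_hat, cov_ij in rel_measurements:
        # Residual of the constraint R_j ≈ R_ij_hat @ R_i.
        r = log_so3(R_abs[j].T @ R_ij_hat @ R_abs[i])
        m2 = r @ np.linalg.solve(cov_ij, r)                     # squared Mahalanobis distance
        total += cauchy_c ** 2 * np.log1p(m2 / cauchy_c ** 2)   # robust (Cauchy) loss
    return total
```

Measurements with large propagated covariances are automatically down-weighted through the Mahalanobis whitening, before the robust loss further suppresses outlier epipolar geometries.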
|
Yang_Learning_Event_Guided_High_Dynamic_Range_Video_Reconstruction_CVPR_2023 | Abstract
Limited by the trade-off between frame rate and exposure time when capturing moving scenes with conventional cameras, frame based HDR video reconstruction suffers from scene-dependent exposure ratio balancing and ghosting artifacts. Event cameras provide an alternative visual representation with a much higher dynamic range and temporal resolution, free from the above issues, which makes them effective guidance for HDR imaging from LDR videos. In this paper, we propose a multimodal learning framework for event guided HDR video reconstruction. In order to better leverage the knowledge of the same scene from the two modalities of visual signals, we propose a multimodal representation alignment strategy to learn a shared latent space and a fusion module tailored to complementing the two types of signals for different dynamic ranges in different regions. Temporal correlations are utilized recurrently to suppress the flickering effects in the reconstructed HDR video. The proposed HDRev-Net demonstrates state-of-the-art performance quantitatively and qualitatively for both synthetic and real-world data.
| 1. Introduction
The dynamic range of the real world usually exceeds
what a conventional camera and 8-bit image can record by a
large margin. High dynamic range (HDR) imaging, which expands the luminance range limited by low dynamic range (LDR) images or videos, is a broadly used technique with
extensive applications in photography/videography, video
games, and high-end display.
Most HDR imaging methods for conventional cameras
rely on capturing and merging multiple snapshots with different exposure times [9,49], which is challenging for capturing videos. There have been enduring efforts for sophisticated modification on conventional frame based cameras to capture multi-exposure sequences (nearly) simultaneously, e.g., beam splitting with three or more sensors [69,70], temporally [7,30,32] or spatially [1,8,22,28,53,54] varying exposure. Nevertheless, their abilities for HDR video reconstruction are limited by the trade-off between a higher frame rate (for a smooth viewing experience) and a higher dynamic range (for capturing details in dark regions with
exposure. Nevertheless, their abilities for HDR video re-construction are limited by the trade-off between a higherframe rate (for a smooth viewing experience) and a higherdynamic range (for capturing details in dark regions with
∗Corresponding author
Project page: https://yixinyang-00.github.io/HDRev/
prolonged exposure time). Moreover, the optimal expo-
sure ratio between LDR frame sequences with different ex-posure settings is scene-dependent and temporally-varying,whose balancing is difficult for diverse scenes captured invideos. Even worse, moving objects or camera shakingduring video capture can lead to ghosting effects in framesgenerated by long exposure shots. An HDR video couldalso be hallucinated from LDR inputs in a frame-by-framemanner by leveraging prior knowledge of tone-mapping op-erators [ 61] or data modeling powers of deep learning [ 11].
However, due to the highly ill-posed nature of the hallucina-tion process, it inevitably leads to severe flickering effects.
In recent years, the event camera [16] has drawn increasing attention of researchers, due to its advantages over conventional frame based ones in sensing fast motions and extended dynamic ranges (e.g., 120 dB for DAVIS346). Unlike using multi-exposure frames, events recorded along with an LDR video encode HDR irradiance changes without sacrificing the frame rate/exposure time of the LDR video, which avoids ghosting artifacts as well, and is very promising as guidance for HDR video reconstruction.
However, integrating events with LDR video for HDR video reconstruction is challenging due to inconsistency between events and frames in three aspects: 1) Modality misalignment: Frames and events are completely different representations of visual information, and “fusing” them by first translating events into intensity values [60,80] like [24] often includes artifacts from solving the ill-posed event integration problem. 2) Dynamic range gap: Performing image/video reconstruction under the guidance of events [75], i.e., doubly integrating events as intensity changes within the exposure time [56,57], ignores the dynamic range clipping in the capturing process of LDR frames, which leads to uncertainties in under/over-exposed regions. 3) Texture mismatching: Regions with smooth textures and slow motion hardly produce effective event observations, which results in inconsistent textures among consecutive event stacks and flickering effects in the reconstructed videos.
We propose HDRev-Net, a multimodal learning framework for event guided HDR video reconstruction, to tackle these challenges with the following strategies: 1) To achieve multimodal representation alignment for the two modalities of the same scene, we propose a learning strategy to progressively project them onto a shared representation space. 2) To reliably complement information from the two modalities in over/under-exposed regions, the representations produced by the two modality-specific encoders are fused into an expressive joint representation using a confidence guided multimodal fusion module. 3) To effectively suppress the flickering effects, we utilize the temporally redundant information between consecutive frames and events via the proposed recurrent convolutional encoders.
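One way to picture the confidence guided fusion is sketched below: a per-pixel confidence predicted from the LDR luminance weights the frame features, and poorly exposed regions lean on the event features instead. The module layout and channel sizes are assumptions for illustration, not the paper's exact design.

```python
# Illustrative confidence-guided fusion: weight frame features by an exposure
# confidence map and fill under/over-exposed regions with event features.
# Assumes both feature maps share the spatial resolution of the luminance map.
import torch
import torch.nn as nn

class ConfidenceGuidedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conf = nn.Sequential(                       # per-pixel confidence in [0, 1]
            nn.Conv2d(1, channels // 2, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels // 2, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, frame_feat, event_feat, ldr_luma):
        # ldr_luma: (B, 1, H, W); confidence should drop where pixels are
        # under- or over-exposed, so events dominate those regions.
        c = self.conf(ldr_luma)
        return self.fuse(torch.cat([c * frame_feat, (1.0 - c) * event_feat], dim=1))
```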
As shown in Fig. 1, HDRev-Net can successfully fuse LDR frames and events to obtain HDR frames with more details and less flickering effects. It demonstrates state-of-the-art HDR video reconstruction performance on both synthetic and real data by making the following contributions:
• We design a multimodal alignment strategy to bridge the gap between events and frames by aligning their representations in a shared latent space.
• We develop a confidence guided fusion module to complement HDR information from events and finer details from well-exposed regions in LDR frames.
• We utilize the temporal correlation between consecutive events and LDR frames in a recurrent fashion to alleviate the flickering effects in recovered HDR videos.
|
Yang_Diffusion_Probabilistic_Model_Made_Slim_CVPR_2023 | Abstract
Despite the recent visually-pleasing results achieved, the
massive computational cost has been a long-standing flaw
for diffusion probabilistic models (DPMs), which, in turn,
greatly limits their applications on resource-limited plat-
forms. Prior methods towards efficient DPM, however, have
largely focused on accelerating the testing yet overlooked
their huge complexity and sizes. In this paper, we make
a dedicated attempt to lighten DPM while striving to pre-
serve its favourable performance. We start by training a
small-sized latent diffusion model (LDM) from scratch, but
observe a significant fidelity drop in the synthetic images.
Through a thorough assessment, we find that DPM is in-
trinsically biased against high-frequency generation, and
learns to recover different frequency components at differ-
ent time-steps. These properties make compact networks
unable to represent frequency dynamics with accurate high-
frequency estimation. Towards this end, we introduce a
customized design for slim DPM, which we term Spectral Diffusion (SD), for light-weight image synthesis. SD
incorporates wavelet gating in its architecture to enable
frequency dynamic feature extraction at every reverse step,
and conducts spectrum-aware distillation to promote high-
frequency recovery by inverse weighting the objective based
on spectrum magnitude. Experimental results demonstrate
that, SD achieves 8-18× computational complexity reduc-
tion as compared to the latent diffusion models on a series of
conditional and unconditional image generation tasks while
retaining competitive image fidelity.
| 1. Introduction
Diffusion Probabilistic Models (DPMs) [18,57,59] have
recently emerged as a powerful tool for generative mod-
eling, and have demonstrated impressive results in image
synthesis [8, 45, 48], video generation [17, 20, 77] and 3D
editing [43]. Nevertheless, the gratifying results come with
a price: DPMs suffer from massive model sizes. In fact,
*Corresponding author
[Figure 1 annotations: DPM (274.1M #Param, 96.1G MACs), Lite DPM* (22.4M #Param, 7.9G MACs), Spectral DPM (Ours, 21.1M #Param, 6.7G MACs); smaller #Params (M) and better FID.]
Figure 1. (1) Visualization of the frequency gap among generated
images with the DPM [48], Lite DPM and our SD on FFHQ [27]
dataset. Lite-DPM is unable to recover fine-grained textures, while
SD can produce realistic patterns. (2) Model size, Multiply-Add
cumulation (MACs) and FID score on ImageNet [7]. Our model
achieves compelling visual quality with minimal computational
cost. ∗ indicates our re-implemented version.
state-of-the-art DPMs require billions of parameters, with hundreds or even thousands of inference steps per image. For example, DALL·E 2 [45], which is composed of 4 separate diffusion models, requires 5.5B parameters and 356 sampling steps in total. Such an enormous model size, in
turn, makes DPMs extremely cumbersome to be employed
in resource-limited platforms.
However, existing efforts towards efficient DPMs have
focused on model acceleration, but largely overlooked light-
ening of the model. For example, the approaches of [1,
32, 37, 38, 40, 52, 56] strive for faster sampling, while those
of [13, 19, 48, 62] rely on reducing the input size. Admit-
tedly, all of these methods give rise to shortened training or
inference time, yet still, the large sizes prevent them from
many real-world application scenarios.
In this paper, we make a dedicated efforts towards build-
ing compact DPMs. To start with, we train a lite version
of the popular latent diffusion model (LDM) [48] by re-
ducing the channel size. We show the image generated by
the original and the lite DPM in Figure 1. While the lite
LDM sketches the overall structure of the faces, the high-
frequency components, such as the skin and hair textures,
are unfortunately poorly recovered. This phenomenon can
be in fact revealed by the Discrete Fourier Transform (DFT)
coefficient shown on the right column, indicating that the
conventional design for DPMs leads to high-frequency de-
ficiency when the model is made slim.
We then take an in-depth analysis on the DPMs through
the lens of frequency, which results in two key obser-
vations. (1) Frequency Evolution. Under mild assump-
tions, we mathematically prove that DPMs learn different
functionalities at different stages of the denoising process.
Specifically, we show that the optimal denoiser in fact boils
down to a cascade of wiener filters [66] with growing band-
widths. After recovering the low-frequency components,
high-frequency features are added gradually in the later de-
noising stages. As a consequence of this evolution property, small DPMs fail to learn dynamic bandwidths with limited
parameters. (2) Frequency Bias. DPM is biased towards
dominant frequency components of the data distribution. It
is most obvious when the noise amplitude is small, lead-
ing to inaccurate noise prediction at the end of the reverse
process. As such, small DPMs struggle to recover the high-
frequency band and image details.
Motivated by these observations, we propose a novel
Spectral Diffusion ( SD) model, tailored for light-weight im-
age synthesis. Our core idea is to introduce the frequency
dynamics and priors into the architecture design and train-
ing objective of the small DPM, so as to explicitly preserve
the high-frequency details. The proposed solution consists
of two parts, each accounting for one of the aforementioned observations. For the frequency evolution, we propose a wavelet
gating operation, which enables the network to dynamically
adapt to the spectrum response at different time-steps. In
the upsample and downsample stage, the input feature is
first decomposed through wavelet transforms and the coef-
ficients are re-weighted through a learnable gating function.
It significantly lowers the parameter requirements to repre-
sent the frequency evolution in the reverse process.
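A minimal version of the wavelet gating idea is sketched below: a single-level 2D Haar transform splits a feature map into four subbands, a gate conditioned on the diffusion time-step embedding re-weights them, and the inverse transform recomposes the feature. The Haar basis, sigmoid gate, and placement are illustrative assumptions rather than the paper's exact operator.

```python
# Wavelet gating sketch: re-weight Haar subbands with a time-conditioned gate.
import torch
import torch.nn as nn

def haar2d(x):
    # x: (B, C, H, W) with even H, W -> four half-resolution subbands.
    x00, x01 = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
    x10, x11 = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
    ll = (x00 + x01 + x10 + x11) / 2
    lh = (x00 - x01 + x10 - x11) / 2
    hl = (x00 + x01 - x10 - x11) / 2
    hh = (x00 - x01 - x10 + x11) / 2
    return ll, lh, hl, hh

def inverse_haar2d(ll, lh, hl, hh):
    B, C, H, W = ll.shape
    x = ll.new_zeros(B, C, 2 * H, 2 * W)
    x[..., 0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[..., 0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[..., 1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[..., 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

class WaveletGate(nn.Module):
    def __init__(self, time_dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(time_dim, 4), nn.Sigmoid())  # one weight per subband

    def forward(self, feat, t_emb):
        # feat: (B, C, H, W); t_emb: (B, time_dim) diffusion time-step embedding.
        g = self.gate(t_emb)[:, :, None, None, None]     # (B, 4, 1, 1, 1)
        bands = haar2d(feat)
        return inverse_haar2d(*[g[:, i] * band for i, band in enumerate(bands)])
```

Because the gate is conditioned on the time-step, the same block can emphasize low frequencies early in the reverse process and let high-frequency bands through later, mirroring the frequency evolution discussed above.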
To compensate for the frequency bias for small DPMs,
we distill high-frequency knowledge from a teacher DPM
to a compact network. This is achieved by inversely weight-
ing the distillation loss based on the magnitudes of the fre-
quency spectrum. In particular, we give more weight to the
frequency bands with small magnitudes, which strength-
ens the recovery of high-frequency details for the student
model. By integrating both designs seamlessly, we build a
slim latent diffusion model, called SD, which largely pre-
serves the performance of LDM. Notably, SD inherits the
advantages of DPMs, including superior sample diversity,
training stability, and tractable parameterization. As shown in Figure 1, our model is 8∼18× smaller and runs 2∼5× faster than the original LDM, while achieving
competitive image fidelity.
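The spectrum-aware distillation objective can be sketched as an inverse-magnitude weighting in the Fourier domain, so that bands where the teacher spectrum is small (typically high frequencies) contribute more to the loss. The exact weighting and normalization below are assumptions for illustration, not the paper's formulation.

```python
# Sketch of spectrum-aware distillation: compare student and teacher outputs
# in the Fourier domain, inversely weighted by the teacher spectrum magnitude.
import torch

def spectrum_aware_distillation(student_out, teacher_out, eps: float = 1e-3):
    # student_out, teacher_out: (B, C, H, W) predictions or feature maps.
    fs = torch.fft.fft2(student_out, norm="ortho")
    ft = torch.fft.fft2(teacher_out, norm="ortho")
    weight = 1.0 / (ft.abs() + eps)          # rare (small-magnitude) bands get more weight
    weight = weight / weight.mean()          # keep the overall loss scale stable
    return (weight * (fs - ft).abs() ** 2).mean()
```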
The contributions of this study are threefold:
1. This study investigates the task of diffusion model
slimming, which has remained largely unexplored.
2. We identify that the key challenge lies in the unrealistic recovery of high-frequency components. By prob-
ing DPMs from a frequency perspective, we show that
there exists a spectrum evolution over different denois-
ing steps, and the rare frequencies cannot be accurately
estimated by small models.
3. We propose SD, a slim DPM that effectively restores
imagery textures by enhancing high-frequency genera-
tion performance. SD achieves gratifying performance
on image generation tasks at a low cost.
|
Zhang_Modeling_Video_As_Stochastic_Processes_for_Fine-Grained_Video_Representation_Learning_CVPR_2023 | Abstract
A meaningful video is semantically coherent and
changes smoothly. However, most existing fine-grained
video representation learning methods learn frame-wise
features by aligning frames across videos or exploring rel-
evance between multiple views, neglecting the inherent dy-
namic process of each video. In this paper, we propose to
learn video representations by modeling Video as Stochas-
tic Processes (VSP) via a novel process-based contrastive
learning framework, which aims to discriminate between
video processes and simultaneously capture the temporal
dynamics in the processes. Specifically, we enforce the em-
beddings of the frame sequence of interest to approximate
a goal-oriented stochastic process, i.e., Brownian bridge,
in the latent space via a process-based contrastive loss. To
construct the Brownian bridge, we adapt specialized sam-
pling strategies under different annotations for both self-
supervised and weakly-supervised learning. Experimental
results on four datasets show that VSP stands as a state-of-
the-art method for various video understanding tasks, in-
cluding phase progression, phase classification, and frame
retrieval. Code is available at https://github.com/
hengRUC/VSP.
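As a rough sketch of the process-based constraint (an assumed form, not the paper's implementation), the embedding of a frame at time t can be scored against a Brownian bridge whose mean interpolates the start and end embeddings and whose variance scales as t(T - t)/T, and the true frame can then be contrasted against frames from other videos. The noise scale `sigma` and the exact loss shape are assumptions.

```python
# Illustrative Brownian-bridge contrastive objective for a single sample:
# z_t, z_0, z_T are (D,) embeddings; negatives is (K, D) from other videos.
import torch
import torch.nn.functional as F

def bridge_log_score(z, z_0, z_T, t, T, sigma: float = 1.0):
    alpha = t / T
    mean = (1.0 - alpha) * z_0 + alpha * z_T          # bridge mean: linear interpolation
    var = sigma ** 2 * t * (T - t) / T + 1e-6         # variance peaks mid-sequence
    return -((z - mean) ** 2).sum(dim=-1) / (2.0 * var)

def process_contrastive_loss(z_t, z_0, z_T, negatives, t, T):
    pos = bridge_log_score(z_t, z_0, z_T, t, T)       # scalar
    neg = bridge_log_score(negatives, z_0, z_T, t, T) # (K,)
    logits = torch.cat([pos.unsqueeze(0), neg], dim=0).unsqueeze(0)
    return F.cross_entropy(logits, torch.zeros(1, dtype=torch.long))
```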
| 1. Introduction
Fine-grained video representation learning [ 11] is one of
the fundamental problems in computer vision, which has
great practical value in various real-world applications such
as action phase classification [ 11,40], phase boundary de-
tection [ 26], and video object segmentation [ 7,9,22,33].
The way to model videos, especially the temporal dynam-
ics, is the core problem of video representation learning
and is highly relevant to available data annotations. Pioneer
works [ 4,29] directly model video as 3D data where tempo-
ral is one dimension, and they require large-scale human-
generated annotations for representation learning. How-
ever, it is labor-intensive and time-consuming to collect
those annotations. Besides, human-generated annotations
hinder domain generalization to multiple downstream tasks.
*Equal contributions.
†Corresponding author.
To alleviate the requirement on labeled data, some re-
cent works [ 11–13] model the video alignment (Figure 1(a))
across the temporal dimensions by the cycle-consistency
loss [ 11] or temporal alignment loss [ 13]. Their basic as-
sumption is that two videos of the same action can be
aligned over temporal ordering in the embedding space, and
the latent correspondences across sequence pairs can be re-
garded as a supervisory signal. However, these methods es-
sentially work in a weakly-supervised manner that requires
video-level annotations to construct video pairs, impeding
their application in the real-world scene where the semantic
labels are absent.
As an alternative, self-supervised video representation
learning [ 5,26] explores the view relevance (Figure 1(b))
between two augmented views of one video. By model-
ing video as a sequence along the temporal dimensions,
they elaborately construct two views through a series of
spatio-temporal data augmentations. The training objec-
tive is to encourage the relevance of two augmented views
to conform to their assumptions,e.g., spatio-temporal con-
trast [ 26] or similarity distribution [ 5]. However, those
methods are sensitive to complex hand-crafted view augmen-
tations and thus suffer from sub-optimal performance.
As crucial and intrinsic cues, the dynamics of videos
impose temporal correlations among successive frames.
Therefore, the evolution process of the corresponding fine-
grained representations should follow coherent constraints,
which can be modeled as a stochastic process. To this
end, we propose a new perspective that considers Video as
Stochastic Processes (VSP) to explicitly capture the tem-
poral dynamics of videos by exploring process agreement
Figure 1. The evolution of fine-grained video representation learning. (a) Video alignment (e.g., TCC [ 11], LAV [ 13]) enforces two videos
from the same action to be aligned temporally. (b) View relevance (e.g., TCN [ 26], CARL [ 5]) enforces the relevance of two augmented views to
conform to specific assumptions. (c) The proposed process agreement models video as a stochastic process and enforces an arbitrary frame
to agree with a time-variant Gaussian distribution conditioned on the start and end frames.
(Figure 1(c)). The basic assumption is that a video phase
is coherent and smoothly changes from the start to the end,
which is essentially a goal-oriented stochastic process that
neighboring points are similar to each other and their co-
herent changes abide by the direction of the endpoint. For
example, a baseball pitching video demonstrates a series
of continuous movements as the ball flies out of the hand.
Specifically, we model a video phase as a goal-oriented
stochastic process,i.e., the Brownian bridge [ 3,25], where
the frame representations in the latent embedding space are
expected to be smooth temporal dynamics conditioned on
the fixed start and end frames. With this intuitive assump-
tion, an arbitrary frame is enforced to be like a noisy lin-
ear interpolation between the start and end frames with un-
certainty in a latent space,i.e., agree with a time-variant
Gaussian distribution. By modeling video as stochastic pro-
cesses, the proposed method captures the dynamics of each
action and establishes dependencies between video frames
as well as the semantic consistency of the whole video.
Compared with video alignment which assumes pairing
videos can be temporally aligned or view relevance which
assumes two augmented views are relevant, VSP only re-
quests process agreement that assumes the internal frames
agree with the start and end frames, discarding the expen-
sive annotated video pairs or hand-crafted view pairs.
The implementation of VSP follows a process-based
contrastive learning framework where each sample is a
frame triplet (start, internal, end). The start and end frames
of each sample are identified as the beginning and end of the
Brownian bridge. The positive samples are the frames inside
the Brownian bridge while the negatives are the ones outside.
The training objective is to enforce the positive samples to
conform to the distribution of the target Brownian bridge
process while the negative samples stay away from it. Ben-
efiting from the tunability of the start and end points of the
Brownian bridge, VSP is versatile for various annotation
situations. For the most generic situations where human an-
notations are not accessible, VSP works in a self-supervised
manner by randomly sampling the triplets with an empirical
length as Brownian bridges. With the phase-level annota-
tions, VSP gains more powerful representations by taking
each phase as a Brownian bridge in a weakly-supervised
manner. As for the frame-level annotations, the proposed
process-based contrastive objective serves as the regulariza-
tion term of conventional contrastive losses.
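To make the bridge construction concrete, a simplified process-based contrastive loss could be written as below; the closed-form mean (1 − t/T)·z_0 + (t/T)·z_T and variance t(T − t)/T follow the standard Brownian bridge, while the InfoNCE-style aggregation and all function names are illustrative assumptions rather than the paper's exact objective:

```python
# Minimal, simplified sketch of a Brownian-bridge-based contrastive objective:
# an internal frame embedding z_t should look like a sample from the
# time-variant Gaussian bridging the start and end embeddings, while
# negatives should not.
import torch
import torch.nn.functional as F

def bridge_log_density(z_t, z_0, z_T, t, T):
    """Unnormalized log-density of z_t under the bridge N(mu_t, sigma_t^2 I)."""
    alpha = t / T
    mu = (1 - alpha) * z_0 + alpha * z_T
    var = t * (T - t) / T + 1e-6
    return -((z_t - mu) ** 2).sum(-1) / (2 * var)

def process_contrastive_loss(z_pos, z_negs, z_0, z_T, t, T):
    """z_pos: (D,); z_negs: (N, D); InfoNCE over bridge log-densities."""
    pos = bridge_log_density(z_pos, z_0, z_T, t, T)        # scalar
    neg = bridge_log_density(z_negs, z_0, z_T, t, T)       # (N,)
    logits = torch.cat([pos.view(1), neg])
    return F.cross_entropy(logits.unsqueeze(0),
                           torch.zeros(1, dtype=torch.long))

D = 16
z0, zT = torch.randn(D), torch.randn(D)
loss = process_contrastive_loss(torch.randn(D), torch.randn(5, D),
                                z0, zT, t=3.0, T=10.0)
```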
The main contributions are summarized as follows:
• We propose a novel fine-grained video representation
learning framework that models Video as Stochastic
Processes (VSP) by enforcing frame sequences to con-
form to Brownian bridge distributions via a process-
based contrastive loss.
• We adopt specialized sampling strategies for differ-
ent types of annotated data by adjusting the Brownian
bridge and therefore acquire favorable video represen-
tations in both self-supervised and weakly-supervised
manners.
• To the best of our knowledge, we are the first to model
video as a stochastic process and achieve state-of-the-
art performance on various fine-grained video under-
standing tasks across four widely-used datasets.
|
Yan_PlenVDB_Memory_Efficient_VDB-Based_Radiance_Fields_for_Fast_Training_and_CVPR_2023 | Abstract
In this paper, we present a new representation for neural
radiance fields that accelerates both the training and the
inference processes with VDB, a hierarchical data struc-
ture for sparse volumes. VDB takes both the advantages
of sparse and dense volumes for compact data representa-
tion and efficient data access, being a promising data struc-
ture for NeRF data interpolation and ray marching. Our
method, Plenoptic VDB (PlenVDB), directly learns the VDB
data structure from a set of posed images by means of a
novel training strategy and then uses it for real-time ren-
dering. Experimental results demonstrate the effectiveness
and the efficiency of our method over previous arts: First, it
converges faster in the training process. Second, it delivers
a more compact data format for NeRF data presentation.
Finally, it renders more efficiently on commodity graphics
hardware. Our mobile PlenVDB demo achieves 30+ FPS,
1280×720 resolution on an iPhone12 mobile phone. Check
plenvdb.github.io for details.
*Work done while the author was an intern at ByteDance.
†Corresponding authors. | 1. Introduction
With the recent advancement of Neural Radiance Fields
(NeRF) [17], high-quality Novel View Synthesis from a
sparse set of input images can be achieved. It has many ap-
plications in multimedia, AR/VR, gaming, etc. On the other
hand, new content creation paradigms have been proposed
based on NeRF, such as Dreamfusion [25], which enable
the possibility of general text-to-3D synthesis.
Despite the promising results, one shortage of NeRF is
the expensive computation of training and rendering, which
prohibits real-time applications and effective scene creation.
There have been many efforts to accelerate NeRF rendering
by pre-computing and storing the results or intermediate re-
sults into a 3D grid. Thus, the computation cost for ren-
dering will be reduced by several orders of magnitude. Al-
though the methods that exploit 3D dense grid [7,27,30] can
achieve real-time rendering and fast training, they usually
introduce more storage overhead, which limits the applica-
tion on mobile devices. On the other hand, for the methods
that utilize 3D sparsity [4, 5, 9, 10, 33], real-time rendering
and small storage overhead can be achieved, but the training
time is usually getting worse since many of them will first
train a Vanilla NeRF or a dense grid, and then convert it to
the sparse representation.
In this paper, we propose an efficient sparse neural vol-
ume representation, which we call Plenoptic VDB (Plen-
VDB). The VDB [19] is an industry-proven efficient hierar-
chical data structure being used in high-performance anima-
tion and simulation for years. We adopt its design principle
and use VDB to represent NeRF. VDB takes both advan-
tages of sparse and dense volumes for compact data repre-
sentation and efficient data access, being a promising data
structure for NeRF data interpolation and ray casting. In
addition, we propose a novel training approach to directly
learn the VDB data without additional conversion steps, so
that our model is neat and compact. We show that our model
represents high-resolution details of a scene with a lower
volume size for fast training and rendering over the state
of the arts. Moreover, the trained VDB model can be ex-
ported into the NanoVDB [20] format and be used in graph-
ics shaders, such as the GLSL fragment shader, which enables
rendering a NeRF model on mobile devices in real time.
In our experiment, the mobile PlenVDB achieves 30+ FPS,
1280×720 resolution on an iPhone12 mobile phone.
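The data structure itself is standard VDB/NanoVDB; purely as an illustration of why a hierarchical sparse grid is compact yet fast to query, a toy two-level grid (a hash map of small dense leaf blocks, not the real VDB tree or its API) can be sketched as:

```python
# Illustrative sketch (not the actual VDB/NanoVDB API) of the idea behind a
# hierarchical sparse grid: a shallow structure maps coarse block coordinates
# to small dense leaf nodes, so empty space costs almost nothing while lookups
# inside occupied blocks stay O(1).
import numpy as np

class SparseGrid:
    def __init__(self, leaf_size=8, channels=4):
        self.leaf_size = leaf_size
        self.channels = channels
        self.leaves = {}          # (bx, by, bz) -> dense (L, L, L, C) array

    def _split(self, ijk):
        block = tuple(ijk // self.leaf_size)
        local = tuple(ijk % self.leaf_size)
        return block, local

    def set(self, ijk, value):
        block, local = self._split(np.asarray(ijk))
        leaf = self.leaves.setdefault(
            block,
            np.zeros((self.leaf_size,) * 3 + (self.channels,), np.float32))
        leaf[local] = value

    def get(self, ijk):
        block, local = self._split(np.asarray(ijk))
        leaf = self.leaves.get(block)
        if leaf is None:                      # empty space: background value
            return np.zeros(self.channels, np.float32)
        return leaf[local]

grid = SparseGrid()
grid.set((100, 5, 42), np.ones(4))
print(grid.get((100, 5, 42)), grid.get((0, 0, 0)))
```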
In summary, our approach has two main contributions:
• We first use VDB as the sparse volume data structure
for NeRF acceleration, and achieve fast rendering even
on mobile devices.
• We propose a strategy that learns the VDB directly,
achieving fast training and a small storage footprint.
|
Zemni_OCTET_Object-Aware_Counterfactual_Explanations_CVPR_2023 | Abstract
Nowadays, deep vision models are being widely de-
ployed in safety-critical applications, e.g., autonomous
driving, and explainability of such models is becoming a
pressing concern. Among explanation methods, counter-
factual explanations aim to find minimal and interpretable
changes to the input image that would also change the
output of the model to be explained. Such explanations
point end-users at the main factors that impact the deci-
sion of the model. However, previous methods struggle to
explain decision models trained on images with many ob-
jects, e.g., urban scenes, which are more difficult to work
with but also arguably more critical to explain. In this
work, we propose to tackle this issue with an object-centric
framework for counterfactual explanation generation. Our
method, inspired by recent generative modeling works, en-
codes the query image into a latent space that is structured
in a way to ease object-level manipulations. Doing so, it
provides the end-user with control over which search di-
rections (e.g., spatial displacement of objects, style mod-
ification, etc.) are to be explored during the counterfac-
tual generation. We conduct a set of experiments on coun-
terfactual explanation benchmarks for driving scenes, and
we show that our method can be adapted beyond classifi-
cation, e.g., to explain semantic segmentation models. To
complete our analysis, we design and run a user study
that measures the usefulness of counterfactual explanations
in understanding a decision model. Code is available at
https://github.com/valeoai/OCTET .
| 1. Introduction
Deep learning models are now being widely deployed,
notably in safety-critical applications such as autonomous
driving. In such contexts, their black-box nature is a major
concern, and explainability methods have been developed to
improve their trustworthiness. Among them, counterfactual
explanations have recently emerged to provide insights into
a model’s decision [7, 52, 55]. Given a decision model and
an input query, a counterfactual explanation is a data point
that differs minimally but meaningfully from the query in
(Figure panels: query image; counterfactual explanations w/ OCTET, targeting the road and targeting the yellow car; predicted action "Left".)
Figure 1. Counterfactual explanations generated by OCTET.
Given a classifier that predicts whether or not it is possible to go
left, and a query image ( top left ), OCTET produces a counterfac-
tual explanation where the most influential features that led to the
decision are changed ( top right ). On the bottom row, we show that
OCTET can also operate under different settings that result in dif-
ferent focused explanations. We report the prediction made by the
decision model at the top left of each image.
a way that changes the output decision of the model. By
looking at the differences between the query and the expla-
nation, a user is able to infer — by contrast — which ele-
ments were essential for the model to come to its decision.
However, most counterfactual methods have only shown re-
sults for explaining classifiers trained on single-object im-
ages such as face portraits [4, 28, 29, 42, 46]. Aside from
the technical difficulties of scaling up the resolution of the
images, explaining decision models trained on scenes com-
posed of many objects also present the challenge that those
decisions are often multi-factorial. In autonomous driving,
for example, most decisions have to take into account the
position of all other road users, as well as the layout of the
road and its markings, the traffic light and signs, the overall
visibility, and many other factors.
In this paper, we present a new framework, dubbed
OCTET for Object-aware CounTerfactual ExplanaTions, to
generate counterfactual examples for autonomous driving.
We leverage recent advances in unsupervised compositional
generative modeling [14] to provide a flexible explanation
method. Exploiting such a model as our backbone, we can
assess the contribution of each object of the scene indepen-
dently and look for explanations in relation to their posi-
tions, their styles, or combinations of both.
To validate our claims, extensive experiments are con-
ducted for self-driving action decision models trained with
the BDD100k dataset [60] and its BDD-OIA extension [59].
Fig. 1 shows counterfactual explanations found by OCTET:
given a query image and a visual decision system trained to
assess if the ego vehicle is allowed to go left or not, OCTET
proposes a counterfactual example with cars parked on the
left side that are moved closer. When inspecting specific
items (bottom row of the figure), OCTET finds that moving
the yellow car to the left, or adding a double line marking
on the road, are ways to change the model’s decision. These
explanations highlight that the absence of such elements on
the left side of the road heavily influenced the model.
To sum up, our contributions are as follows:
• We tackle the problem of building counterfactual expla-
nations for visual decision models operating on com-
plex compositional scenes. We specifically target au-
tonomous driving visual scenarios.
• Our method is also a tool to investigate the role of spe-
cific objects in a model’s decision, empowering the user
with control over which type of explanation to look for.
• We thoroughly evaluate the realism of our counter-
factual images, the minimality and meaningfulness of
changes, and compare against previous reference strate-
gies. Beyond explaining classifiers, we also demon-
strate the versatility of our method by addressing ex-
planations for a segmentation network.
• Finally, we conduct a user-centered study to assess the
usefulness of our explanations in a practical case. As
standard evaluation benchmarks for counterfactual ex-
planations are lacking a concrete way to measure the
interpretability of the explanations, our user-centered
study is a key element to validate the presented pipeline.
|
Zhang_PRISE_Demystifying_Deep_Lucas-Kanade_With_Strongly_Star-Convex_Constraints_for_Multimodel_CVPR_2023 | Abstract
The Lucas-Kanade (LK) method is a classic iterative ho-
mography estimation algorithm for image alignment, but
often suffers from poor local optimality especially when im-
age pairs have large distortions. To address this challenge,
in this paper we propose a novel Deep Star-Convexified
Lucas-Kanade (PRISE) method for multimodel image align-
ment by introducing strongly star-convex constraints into
the optimization problem. Our basic idea is to enforce the
neural network to approximately learn a star-convex loss
landscape around the ground truth given any data to facili-
tate the convergence of the LK method to the ground truth
through the high dimensional space defined by the network.
This leads to a minimax learning problem, with contrastive
(hinge) losses due to the definition of strong star-convexity
that are appended to the original loss for training. We also
provide an efficient sampling based algorithm to leverage the
training cost, as well as some analysis on the quality of the
solutions from PRISE. We further evaluate our approach on
benchmark datasets such as MSCOCO, GoogleEarth, and
GoogleMap, and demonstrate state-of-the-art results, espe-
cially for small pixel errors. Code can be downloaded from
https://github.com/Zhang-VISLab .
| 1. Introduction
Deep learning networks have achieved great success in
homography estimation by directly predicting the transfor-
mation matrix in various scenarios. However, the existing
classic algorithms still take the place for showing more ex-
plainability compared with the deep learning architectures.
Such algorithms are often rooted from well-studied theoret-
ical and empirical grounding. Current works often focus
on combining the robustness of deep learning with explain-
ability of classical algorithms to handle multimodel image
alignment such as image modality and satellite modality.
However, due to the high nonconvexity in homography esti-mation, such methods often suffer from poor local optimality.
Recently Zhao et al. [77] proposed DeepLK for multi-
model image alignment, i.e.,estimating the homography be-
tween two planar projections of the same view but across dif-
ferent modalities such as map and satellite images (see Sec.
3.1.1 for formal definition), based on the LK method [46].
This method consists of two novel components:
•A new deep neural network was proposed to map im-
ages from different modalities into the same feature space
where the LK method can align them.
•A new training algorithm was proposed as well by enforc-
ing the local change on the loss landscape should be no
less than a quadratic shape centered at the ground truth for
any image pair, with no specific reason.
Surprisingly, when we evaluate DeepLK based on the public
code1, the proposed network cannot work well without the
proposed training algorithm. This strongly motivate us to
discover the mysteries in the DeepLK training algorithm.
Deep Reparametrization. Our first insight from DeepLK
is that the deep neural network essentially maps the align-
ment problem into a much higher dimensional space by
introducing a large amount of parameters. The high dimen-
sional space provides the feasibility to reshape the loss land-
scape of the LK method. Such deep reparametrization has
been used as a means of reformulating some problems such
as shape analysis [11], super-resolution and denoising [8],
while preserving the properties and constraints in the original
problems. This insight at test time can be interpreted as
\min_{\omega\in\Omega} \ell(\omega; x) \;\xRightarrow[\text{via deep learning}]{\text{reparametrization}}\; \min_{\omega\in\Omega} \ell_f(\omega; x, \theta^*), \qquad (1)
where x ∈ X denotes the input data, ℓ denotes a nonconvex
differentiable function (e.g., the LK loss) parametrized by
ω ∈ Ω, f : X × Θ → X denotes an auxiliary function
presented by a neural network with learned weights θ∗ ∈ Θ
(e.g., the proposed network in DeepLK), and ℓf denotes the
1https://github.com/placeforyiming/CVPR21-Deep-
Lucas-Kanade-Homography
loss with deep reparametrization ( e.g., the DeepLK loss). In
this way, the learning problem is how to train the network
so that the optimal solutions can be located using gradient
descent (GD) given data.
Convex-like Loss Landscapes. Our second insight from
DeepLK is that the learned loss landscape from their training
algorithm tends to be convex-like (see their experimental
results). This is an interesting observation, as it is evidenced
in [39] that empirical more convex-like loss landscapes often
return better performance. However, we cannot find any ex-
plicit explanation through the paper about the reason, which
raises the following questions that we aim to address:
• Does the convex-like shape hold for any image pair?
• If so, why? Is there any guarantee on solutions?
Our Approach: Deep Star-Convexified Lucas-Kanade
(PRISE). To mitigate the issue of poor local optimality in
homography estimation, in this paper we propose a novel
approach, namely PRISE, to enforce deep neural networks
to approximately learn star-convex loss landscapes for the
downstream tasks. Recently star-convexity [49] in noncon-
vex optimization has been attracting more and more atten-
tion [27, 30, 35, 38] because of its capability of finding near-
optimal solutions based on GD with theoretical guarantees.
Star-convex functions refer to a particular class of (typically)
non-convex functions whose global optimum is visible from
every point in a downhill direction. From this view, con-
vexity is a special case of star-convexity. In the literature,
however, most of the works focus on optimizing and analyz-
ing star-convex functions, while learning such functions is
hardly explored. In contrast, our PRISE imposes additional
hinge losses, derived from the definition of star-convexity, on
the learning objective during training. At test time, the nice
convergence properties of star-convexity help find provably
near-optimal solutions for the tasks using the LK method.
We further show that DeepLK is a simplified and approxi-
mate algorithm of PRISE, and thus shares some properties
with ours, but with worse performance.
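For reference, the plain star-convexity condition around the ground truth, and a hinge penalty that encourages it during training, can be written as follows; the strongly star-convex constraints used by PRISE additionally include a quadratic margin term that is omitted here, so this is only the basic form:

```latex
% Star-convexity of \ell_f around the ground truth \omega^*:
% for all \omega \in \Omega and \lambda \in [0, 1],
\ell_f\bigl(\lambda\,\omega^* + (1-\lambda)\,\omega;\, x, \theta\bigr)
  \;\le\; \lambda\,\ell_f(\omega^*; x, \theta) + (1-\lambda)\,\ell_f(\omega; x, \theta).

% A sampled hinge penalty that encourages this property during training:
\mathcal{L}_{\mathrm{star}}
  = \mathbb{E}_{\omega,\lambda}\Bigl[\max\Bigl(0,\;
      \ell_f\bigl(\lambda\,\omega^* + (1-\lambda)\,\omega;\, x, \theta\bigr)
      - \lambda\,\ell_f(\omega^*; x, \theta)
      - (1-\lambda)\,\ell_f(\omega; x, \theta)\Bigr)\Bigr].
```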
Recently [78] have shown that stochastic gradient descent
(SGD) will converge to global minimum in deep learning if
the assumption of star-convexity in the loss landscapes hold.
They validated this assumption (in a major part of training
processes) empirically using relatively shallow networks and
small-scale datasets by showing the classification training
losses can converge to zeros. Nevertheless, we argue that this
assumption may be too strong to hold in complex networks
for challenging tasks, if without any additional imposition on
learning. In our experiments we show that even we attempt to
learn star-convex loss landscapes, the outputs at both training
and test time are hardly perfect for complicated tasks.
Contributions. Our key contributions are listed as follows:
•We propose a novel PRISE method for multimodel im-
age alignment by introducing (strongly) star-convex con-straints into the network training, which is rarely explored
in the literature of deep learning.
•We provide some analysis on the quality of the solutions
from PRISE through star-convex loss landscapes.
•We demonstrate the state-of-the-art results on some bench-
mark datasets for multimodel image alignment with much
better accuracy, especially when the pixel errors are small.
|
Zhang_Aligning_Step-by-Step_Instructional_Diagrams_to_Video_Demonstrations_CVPR_2023 | Abstract
Multimodal alignment facilitates the retrieval of in-
stances from one modality when queried using another. In
this paper, we consider a novel setting where such an align-
ment is between (i) instruction steps that are depicted as
assembly diagrams (commonly seen in Ikea assembly man-
uals) and (ii) segments from in-the-wild videos; these videos
comprising an enactment of the assembly actions in the real
world. We introduce a supervised contrastive learning ap-
proach that learns to align videos with the subtle details of
assembly diagrams, guided by a set of novel losses. To study
this problem and evaluate the effectiveness of our method,
we introduce a new dataset: IAW—for Ikea assembly in the
wild—consisting of 183 hours of videos from diverse fur-
niture assembly collections and nearly 8,300 illustrations
from their associated instruction manuals and annotated
for their ground truth alignments. We define two tasks on
this dataset: First, nearest neighbor retrieval between video
segments and illustrations, and, second, alignment of in-
struction steps and the segments for each video. Extensive
experiments on IAW demonstrate superior performance of
our approach against alternatives.
| 1. Introduction
The rise of Do-It-Yourself (DIY) videos on the web has
made it possible even for an unskilled person (or a skilled
robot) to imitate and follow instructions to complete com-
plex real world tasks [4, 23, 31]. One such task that is of-
ten cumbersome to infer from instruction descriptions yet
easy to imitate from a video is the task of assembling fur-
niture from its parts. Often times the instruction steps in-
volved in such a task are depicted in pictorial form, so that
*Supported by an ANU-MERL PhD scholarship agreement.
†Supported by Marie Sklodowska-Curie grant agreement No. 893465.
‡Supported by an ARC Future Fellowship No. FT200100421.
Figure 1. An illustration of video-diagram alignment between a
YouTube video (top) He0pCeCTJQM and an Ikea furniture man-
ual (bottom) s49069795.
they are comprehensible beyond the boundaries of language
(e.g., Ikea assembly manuals). However, such instructional
diagrams can sometimes be ambiguous, unclear, or may not
match the furniture parts at disposal due to product variabil-
ity. Having access to video sequences that demonstrate the
precise assembly process could be very useful in such cases.
Unfortunately, most DIY videos on the web are created
by amateurs and often involve content that is not necessarily
related to the task at hand. For example, such videos may
include commentary about the furniture being assembled, or
personal assembly preferences that are not captured in the
instruction manual. Further, there could be large collections
of videos on the web that demonstrate the assembly pro-
cess for the same furniture but in diverse assembly settings;
watching them could consume significant time from the as-
sembly process. Thus, it is important to have a mechanism
that can effectively align relevant video segments against
the instructions steps illustrated in a manual.
In this paper, we consider this novel task as a multimodal
alignment problem [25,27], specifically for aligning in-the-
wild web videos of furniture assembly and the respective
diagrams in the instruction manuals as shown in Fig. 1. In
contrast to prior approaches for such multimodal alignment,
which usually uses audio, visual, and language modalities,
our task of aligning images with video sequences brings in
several unique challenges. First, instructional diagrams can
be significantly more abstract compared to text and audio
descriptions. Second, illustrations of the assembly process
can vary subtly from step-to-step (e.g., a rectangle placed on
another rectangle could mean placing a furniture part on top
of another). Third, the assembly actions, while depicted in
a form that is easy for humans to understand, can be incom-
prehensible for a machine. And last, there need not be com-
mon standard or visual language followed when creating
such manuals (e.g., a furniture piece could be represented
as a rectangle based on its aspect ratio, or could be marked
with an identifier, such as a part number). These issues
make automated reasoning of instruction manuals against
their video enactments extremely challenging.
In order to tackle the above challenges, we propose a
novel contrastive learning framework for aligning videos
and instructional diagrams, which better suits the specifics
of our task. We utilize two important priors—a video only
needs to align with its own manual and adjacent steps in a
manual share common semantics—that we encode as terms
in our loss function with multimodal features computed
from video and image encoder networks.
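A simplified version of such an alignment objective is a symmetric InfoNCE loss between clip and diagram embeddings, as sketched below; the two priors above would enter as additional terms (e.g., restricting negatives to steps of the same manual and smoothing adjacent steps), which are omitted here:

```python
# Simplified sketch of a symmetric InfoNCE objective between video-clip and
# diagram embeddings; the paper's full loss adds terms for the two priors
# described above, which are not shown in this sketch.
import torch
import torch.nn.functional as F

def video_diagram_nce(video_emb, diagram_emb, temperature=0.07):
    """video_emb, diagram_emb: (N, D) embeddings of N matched
    (video segment, instruction step) pairs."""
    v = F.normalize(video_emb, dim=-1)
    d = F.normalize(diagram_emb, dim=-1)
    logits = v @ d.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(v.shape[0])
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = video_diagram_nce(torch.randn(8, 128), torch.randn(8, 128))
```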
To study the task in a realistic setting, we introduce a
new dataset as part of this paper, dubbed IAW for Ikea as-
sembly in the wild. Our dataset consists of nearly 8,300 il-
lustrative diagrams from 420 unique furniture types scraped
from the web and 1,005 videos capturing real-world furni-
ture assembly in a variety of settings. We used the Ama-
zon Mechanical Turk to obtain ground truth alignments of
the videos to their instruction manuals. The videos involve
significant camera motions, diverse viewpoints, changes in
lighting conditions, human poses, assembly actions, and
tool use. Such in-the-wild videos offer a compelling setting
for studying our alignment task within its full generality and
brings with it a novel research direction for exploring the
multimodal alignment problem with exciting real-world ap-
plications, e.g., robotic imitation learning, guiding human
assembly, etc.
To evaluate the performance of our learned alignment,
we propose two tasks on our dataset: (i) nearest neighbor
retrieval between videos and instructional diagrams, and (ii)
alignment of the set of instruction steps from the manual to
clips from an associated video sequence. Our experimental
results show that our proposed approach leads to promising
results against a compelling alternative, CLIP [27], demon-
strating 9.68% improvement on the retrieval task and 12%
improvement on the video-to-diagram alignment task.
|
Yang_Towards_Bridging_the_Performance_Gaps_of_Joint_Energy-Based_Models_CVPR_2023 | Abstract
Can we train a hybrid discriminative-generative model
with a single network? This question has recently been
answered in the affirmative, introducing the field of Joint
Energy-based Model (JEM) [17, 48], which achieves high
classification accuracy and image generation quality si-
multaneously. Despite recent advances, there remain two
performance gaps: the accuracy gap to the standard soft-
max classifier, and the generation quality gap to state-of-
the-art generative models. In this paper, we introduce a
variety of training techniques to bridge the accuracy gap
and the generation quality gap of JEM. 1) We incorporate
a recently proposed sharpness-aware minimization (SAM)
framework to train JEM, which promotes the energy land-
scape smoothness and the generalization of JEM. 2) We
exclude data augmentation from the maximum likelihood
estimate pipeline of JEM, and mitigate the negative im-
pact of data augmentation to image generation quality. Ex-
tensive experiments on multiple datasets demonstrate our
SADA-JEM achieves state-of-the-art performances and out-
performs JEM in image classification, image generation,
calibration, out-of-distribution detection and adversarial
robustness by a notable margin. Our code is available at
https://github.com/sndnyang/SADAJEM .
| 1. Introduction
Deep neural networks (DNNs) have achieved state-of-
the-art performances in a wide range of learning tasks, in-
cluding image classification, image generation, object de-
tection, and language understanding [21,30]. Among them,
energy-based models (EBMs) have seen a flurry of inter-
est recently, partially inspired by the impressive results of
IGEBM [10] and JEM [17], which exhibit the capability of
training generative models within a discriminative frame-
work. Specifically, JEM [17] reinterprets the standard soft-
max classifier as an EBM and achieves impressive perfor-
mances in image classification and generation simultane-
ously. Furthermore, these EBMs enjoy improved perfor-
mance on out-of-distribution detection, calibration, and ad-versarial robustness. The follow-up works (e.g., [18, 48])
further improve the training in terms of speed, stability and
accuracy.
Despite the recent advances and the appealing property
of training a single network for hybrid modeling, training
JEM is still challenging on complex high-dimensional data
since it requires an expensive MCMC sampling. Further-
more, models produced by JEM still have an accuracy gap
to the standard softmax classifier and a generation quality
gap to the GAN-based approaches.
In this paper, we introduce a few simple yet effective
training techniques to bridge the accuracy gap and gener-
ation quality gap of JEM. Our hypothesis is that both per-
formance gaps are the symptoms of lack of generalization
of JEM trained models. We therefore analyze the trained
models under the lens of loss geometry. Figure 1 visu-
alizes the energy landscapes of different models by the
technique introduced in [34]. Since different models are
trained with different loss functions, visualizing their loss
functions is meaningless for the purpose of comparison.
Therefore, the LSE energy functions (i.e., Eq. 4) of dif-
ferent models are visualized. Comparing Figure 1(a) and
(b), we find that JEM converges to extremely sharp lo-
cal maxima of the energy landscape as manifested by the
significantly large y-axis scale. By incorporating the re-
cently proposed sharpness-aware minimization (SAM) [12]
to JEM, the energy landscape of trained model (JEM+SAM)
becomes much smoother as shown in Figure 1(c). This
also substantially improves the image classification accu-
racy and generation quality. To further improve the en-
ergy landscape smoothness, we exclude data augmentation
from the maximum likelihood estimate pipeline of JEM,
and visualize the energy landscape of SADA-JEM in Fig-
ure 1(d), which achieves the smoothest landscape among all
the models considered. This further improves image gen-
eration quality dramatically while retaining or sometimes
improving classification accuracy. Since our method im-
proves the performance of JEM primarily in the framework
of sharpness-aware optimization, we refer it as SADA-JEM,
a Sharpness-Aware Joint Energy-based Model with single
branched Data Augmentation.
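To make the combination concrete, one SAM update applied to a JEM-style objective can be sketched as below; the SGLD negative sampling required by the true JEM generative term is omitted (`x_neg` is a placeholder for such samples), and the layer sizes and hyperparameters are illustrative only:

```python
# Rough sketch of one SAM update on a JEM-style objective: classification
# cross-entropy plus a generative term based on the LSE energy
# E(x) = -logsumexp_y f(x)[y].
import torch
import torch.nn.functional as F

def jem_loss(model, x, y, x_neg):
    logits = model(x)
    clf = F.cross_entropy(logits, y)
    e_data = -torch.logsumexp(logits, dim=1).mean()
    e_neg = -torch.logsumexp(model(x_neg), dim=1).mean()
    return clf + (e_data - e_neg)   # lower energy on data, raise it on samples

def sam_step(model, opt, x, y, x_neg, rho=0.05):
    # 1) climb to the worst-case weights inside an L2 ball of radius rho
    jem_loss(model, x, y, x_neg).backward()
    params = [p for p in model.parameters() if p.grad is not None]
    grad_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params)) + 1e-12
    eps = [p.grad * (rho / grad_norm) for p in params]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)
    opt.zero_grad()
    # 2) gradient at the perturbed weights, then undo the perturbation and step
    jem_loss(model, x, y, x_neg).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
    opt.step()
    opt.zero_grad()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
sam_step(model, opt, torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,)),
         torch.randn(4, 3, 32, 32))
```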
(a) Softmax Classifier   (b) JEM   (c) JEM+SAM   (d) SADA-JEM
Figure 1. Visualizing the energy landscapes [34] of different models trained on CIFAR10. Note the dramatic scale differences of the
y-axes, indicating SADA-JEM identifies the smoothest local optimum among all the methods considered.
Our main contributions are summarized as follows:
1. We investigate the energy landscapes of different mod-
els and find that JEM leads to the sharpest one, which
potentially undermines the generalization of trained
models.
2. We incorporate the sharpness-aware minimization
(SAM) framework to JEM to promote the energy land-
scape smoothness, and thus model generalization.
3. We recognize the negative impact of data augmenta-
tion in the training pipeline of JEM, and introduce two
data loaders for image classification and image gen-
eration separately, which improves image generation
quality significantly.
4. Extensive experiments on multiple datasets show that
SADA-JEM achieves the state-of-the-art discrimina-
tive and generative performances, while outperforming
JEM in calibration, out-of-distribution detection and
adversarial robustness by a notable margin.
|
Zhang_Analyzing_Physical_Impacts_Using_Transient_Surface_Wave_Imaging_CVPR_2023 | Abstract
The subtle vibrations on an object’s surface contain in-
formation about the object’s physical properties and its in-
teraction with the environment. Prior works imaged sur-
face vibration to recover the object’s material properties
via modal analysis, which discards the transient vibra-
tions propagating immediately after the object is disturbed.
Conversely, prior works that captured transient vibrations
focused on recovering localized signals ( e.g., recording
nearby sound sources), neglecting the spatiotemporal re-
lationship between vibrations at different object points. In
this paper, we extract information from the transient surface
vibrations simultaneously measured at a sparse set of object
points using the dual-shutter camera described by Sheinin
et al. [37]. We model the geometry of an elastic wave gen-
erated at the moment an object’s surface is disturbed ( e.g.,
a knock or a footstep) and use the model to localize the dis-
turbance source for various materials ( e.g., wood, plastic,
tile). We also show that transient object vibrations contain
additional cues about the impact force and the impacting
object’s material properties. We demonstrate our approach
in applications like localizing the strikes of a ping-pong ball
on a table mid-play and recovering the footsteps’ locations
by imaging the floor vibrations they create.
| 1. Introduction
Our environment is teeming with vibrations created by
the interaction of physical objects. Some vibrations, like
a knock on the door or the sound of a ball bouncing off
the ground, can be perceived by humans because they are
transmitted from the vibrating object’s surface via the air.
However, many vibrations that fill our world are too subtle
for auditory-based remote sensing. Moreover, much like
ripples in a pond, the transient spatial shapes such vibrations
create on object surfaces are a visual cue that can disclose
the source of the disturbance and other object properties.
Object vibrations can be divided into two main types:
transient and modal. For example, consider the vibrations
of a tuning fork. When struck, the impulse creates tran-
sient waves propagating from the impact source until they
Figure 1. When physical objects interact, like a ping pong ball
bouncing off the table, they create minute vibrations that propa-
gate through the objects’ surfaces and interiors. The transient vi-
brations that occur immediately on impact, exaggerated here for
visualization, carry information about the impact source location.
We image the surface vibrations at a sparse set of locations using
the imaging system of Sheinin et al. [37]. We model the elastic
wave propagation and recover the impact source locations with-
out a direct line-of-sight on the impacted surface. Visit the project
page for videos of results [1].
reach and vibrate the fork’s entire body. After a short
time interval, the transient vibrations die down, leaving the
fork to vibrate at its resonant modal frequencies. Modal
analysis, which aims to measure these resonant frequen-
cies [11, 13, 42], can reveal the tuning fork’s designed tone
(e.g., 440 Hz for the Atone) and can also be used to analyze
the fork’s material properties [9, 14, 18].
While extremely useful, modal analysis ignores the tran-
sient vibrations that occur at the moment of impact. Such
transient vibrations contain valuable cues about the dis-
turbance’s origin, its magnitude, and the properties of the
object causing the disturbance ( e.g., a falling basketball
vs. a falling rock). Prior works that did sense transient
vibrations primarily focused on localized low-dimensional
signals such as heartbeats [42, 44, 48], music and speech
[8, 15, 37, 45, 46, 48], and musical instruments [37]. These
works disregard the spatiotemporal relationship between
transient vibrations at different object points.
This paper focuses on recovering the physical loca-
tion of an impacting object from transient surface vibra-
(a) setup (b) pre-reflections (c) post-reflections
Figure 2. Elastic wave propagation in isotropic objects. (a) An
electronic knocker creates repeated short knocks on a whiteboard.
For each knock, a laser Doppler vibrometer (LDV) sensor is used
to optically measure the temporal vertical displacement at a single
point. Aggregating and synchronizing measurements from multi-
ple board points generates a video showing the surface displace-
ment with time. (b) Displacement 1 ms after impact. Observe the
circular shape of the outgoing wave. (c) Displacement 3.1 ms after
impact. Here, the outgoing wave has reflected from the board’s
boundaries.
tions measured simultaneously at multiple surface points
using the dual-shutter camera of Sheinin et al. [37]. This
task opens the door to potential applications like localizing
sound sources in walls ( e.g., pipe bursting), localizing bullet
or bird impacts on airplanes mid-flight, or impacts on ship
hulls from dockside, tugs, or other debris, localizing shell-
ground impacts on battlefields, localizing people in building
fires or hostage situations by observing external vibrations
on ceilings or side walls, and more.
While, in general, object shape and material determine
its vibration profile, we show that immediately after impact,
there exists a short time interval ( ∼1.5 ms long) where the
surface vibrations can be modeled as an outwardly propa-
gating elastic wave. We derive an approximate model of the
wave’s geometry for both isotropic and anisotropic materi-
als and develop a backprojection-based algorithm to local-
ize the impact sources using the vibrations within this time
interval. Unlike prior works that merely visualize acoustic
wave propagation [36], we explicitly model its transient be-
havior and show that only a sparse set of points is required
to determine the wave’s source.
We verified our approach on various materials, including
wood, plastic, glass, porcelain, and gypsum. In our exper-
iments, we localized impact sources with an average error
between 1.1 cm and 2.9 cm for 40 cm ×40 cm and 90 cm
×90 cm surfaces, respectively. We also show applications
like localizing ping-pong ball strikes on the table mid-play
and localizing footsteps through floor vibrations beyond a
camera’s line of sight.
Beyond impact localization, we show that the transient
surface vibrations can convey more information about the
impacting object and the impacted surface. For surfaces
of unknown material, we estimate the material anisotropy
by measuring vibrations at known surface points and fit-
ting a material-specific wave propagation model parameter.
Our preliminary experiments suggest that the transient vi-brations’ amplitudes relate to the force applied to disturb
the object [20, 28, 31], and that the vibrations’ frequency
content depends on the stiffness and shape of the impacting
object. We thus believe our work can inspire a new class of
transient vibration imaging approaches that opens the door
for novel vision tasks.
|
Yu_DyLiN_Making_Light_Field_Networks_Dynamic_CVPR_2023 | Abstract
Light Field Networks, the re-formulations of radiance
fields to oriented rays, are magnitudes faster than their co-
ordinate network counterparts, and provide higher fidelity
with respect to representing 3D structures from 2D obser-
vations. They would be well suited for generic scene rep-
resentation and manipulation, but suffer from one problem:
they are limited to holistic and static scenes. In this pa-
per, we propose the Dynamic Light Field Network (DyLiN)
method that can handle non-rigid deformations, including
topological changes. We learn a deformation field from in-
put rays to canonical rays, and lift them into a higher di-
mensional space to handle discontinuities. We further in-
troduce CoDyLiN, which augments DyLiN with controllable
attribute inputs. We train both models via knowledge distil-
lation from pretrained dynamic radiance fields. We eval-
uated DyLiN using both synthetic and real world datasets
that include various non-rigid deformations. DyLiN qual-
itatively outperformed and quantitatively matched state-of-
the-art methods in terms of visual fidelity, while being 25−
71× computationally faster. We also tested CoDyLiN on at-
tribute annotated data and it surpassed its teacher model.
Project page: https://dylin2023.github.io .
| 1. Introduction
Machine vision has made tremendous progress with re-
spect to reasoning about 3D structure using 2D observations. Much of this progress can be attributed to the emer-
gence of coordinate networks [6,21,26], such as Neural Ra-
diance Fields (NeRF) [23] and its variants [2, 20, 22, 39].
They provide an object agnostic representation for 3D
scenes and can be used for high-fidelity synthesis for unseen
views. While NeRFs mainly focus on static scenes, a series
of works [10,27,29,34] extend the idea to dynamic cases via
additional components that map the observed deformations
to a canonical space, supporting moving and shape-evolving
objects. It was further shown that by lifting this canonical
space to higher dimensions the method can handle changes
in scene topology as well [28].
However, the applicability of NeRF models is consid-
erably limited by their computational complexities. From
each pixel, one typically casts a ray from that pixel, and nu-
merically integrates the radiance and color densities com-
puted by a Multi-Layer Perceptron (MLP) across the ray,
approximating the pixel color. Specifically, the numeri-
cal integration involves sampling hundreds of points across
the ray, and evaluating the MLP at all of those locations.
Several works have been proposed for speeding up static
NeRFs. These include employing a compact 3D represen-
tation structure [9, 18, 43], breaking up the MLP into multi-
ple smaller networks [30,31], leveraging depth information
[7, 24], and using fewer sampling points [17, 24, 42]. Yet,
these methods still rely on integration and suffer from sam-
pling many points, making them prohibitively slow for real-
time applications. Recently, Light Field Networks (LFNs)
[32] proposed replacing integration with a direct ray-to-
color regressor, trained using the same sparse set of images,
requiring only a single forward pass. R2L [36] extended
LFNs to use a very deep residual architecture, trained by
distillation from a NeRF teacher model to avoid overfit-
ting. In contrast to static NeRF acceleration, speeding up
dynamic NeRFs is a much less discussed problem in the
literature. This is potentially due to the much increased dif-
ficulty of the task, as one also has to deal with the high
variability of motion. In this direction, [8, 38] greatly re-
duce the training time by using well-designed data struc-
tures, but their solutions still rely on integration. LFNs are
clearly better suited for acceleration, yet, to the best of our
knowledge, no works have attempted extending LFNs to the
dynamic scenario.
In this paper, we propose 2 schemes extending LFNs
to dynamic scene deformations, topological changes and
controllability. First, we introduce DyLiN, by incorpo-
rating a deformation field and a hyperspace representa-
tion to deal with non-rigid transformations, while distilling
knowledge from a pretrained dynamic NeRF. Afterwards,
we also propose CoDyLiN, via adding controllable input
attributes, trained with synthetic training data generated
by a pretrained Controllable NeRF (CoNeRF) [13] teacher
model. To test the efficiencies of our proposed schemes, we
perform empirical experiments on both synthetic and real
datasets. We show that our DyLiN achieves better image
quality and an order of magnitude faster rendering speed
than its original dynamic NeRF teacher model and the state-
of-the-art TiNeuV ox [8] method. Similarly, we also show
that CoDyLiN outperforms its CoNeRF teacher. We further
execute ablation studies to verify the individual effective-
ness of different components of our model. Our methods
can be also understood as accelerated versions of their re-
spective teacher models, and we are not aware of any prior
works that attempt speeding up CoNeRF.
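A schematic of the resulting forward pass is sketched below; the layer sizes, the 6D Plücker ray parameterization, and the way the per-frame code conditions the MLPs are our own illustrative assumptions, not the paper's exact architecture:

```python
# Schematic sketch of a DyLiN-style forward pass: a deformation MLP maps an
# input ray and a per-frame latent code to a canonical ray, a second MLP lifts
# it to a few ambient/hyperspace coordinates to absorb topological changes,
# and a light-field MLP regresses the pixel color directly (no integration).
import torch
import torch.nn as nn

def mlp(inp, out, width=128, depth=4):
    layers, d = [], inp
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, out))
    return nn.Sequential(*layers)

class DyLiNSketch(nn.Module):
    def __init__(self, code_dim=32, ambient_dim=2):
        super().__init__()
        self.deform = mlp(6 + code_dim, 6)      # ray -> canonical-ray offset
        self.ambient = mlp(6 + code_dim, ambient_dim)
        self.lfn = mlp(6 + ambient_dim, 3)      # direct ray -> RGB

    def forward(self, rays, frame_code):
        h = torch.cat([rays, frame_code], dim=-1)
        canonical = rays + self.deform(h)       # non-bending ray deformation
        w = self.ambient(h)                     # hyperspace coordinates
        return self.lfn(torch.cat([canonical, w], dim=-1))

model = DyLiNSketch()
rgb = model(torch.randn(1024, 6), torch.randn(1024, 32))
# Training: regress rgb against colors rendered by a pretrained dynamic NeRF
# teacher for the same rays (knowledge distillation), e.g. with an L2 loss.
```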
Our contributions can be summarized as follows:
• We propose DyLiN, an extension of LFNs that can
handle dynamic scenes with topological changes.
DyLiN achieves this through non-bending ray defor-
mations, hyperspace lifting for whole rays, and knowl-
edge distillation from dynamic NeRFs.
• We show that DyLiN achieves state-of-the-art results
on both synthetic and real-world scenes, while being
an order of magnitude faster than the competition. We
also include an ablation study to analyze the contribu-
tions of our model components.
• We introduce CoDyLiN, further extending our DyLiN
to handle controllable input attributes. |
Yang_Modeling_Entities_As_Semantic_Points_for_Visual_Information_Extraction_in_CVPR_2023 | Abstract
Recently, Visual Information Extraction (VIE) has been
becoming increasingly important in both the academia and
industry, due to the wide range of real-world applications.
Previously, numerous works have been proposed to tackle
this problem. However, the benchmarks used to assess these
methods are relatively plain, i.e., scenarios with real-world
complexity are not fully represented in these benchmarks.
As the first contribution of this work, we curate and re-
lease a new dataset for VIE, in which the document im-
ages are much more challenging in that they are taken from
real applications, and difficulties such as blur, partial oc-
clusion, and printing shift are quite common. All these fac-
tors may lead to failures in information extraction. There-
fore, as the second contribution, we explore an alternative
approach to precisely and robustly extract key information
from document images under such tough conditions. Specif-
ically, in contrast to previous methods, which usually ei-
ther incorporate visual information into a multi-modal ar-
chitecture or train text spotting and information extraction
in an end-to-end fashion, we explicitly model entities as se-
mantic points, i.e., center points of entities are enriched
with semantic information describing the attributes and re-
lationships of different entities, which could largely bene-
fit entity labeling and linking. Extensive experiments on
standard benchmarks in this field as well as the proposed
dataset demonstrate that the proposed method can achieve
significantly enhanced performance on entity labeling and
linking, compared with previous state-of-the-art models.
Dataset is available at https://www.modelscope.cn/datasets/damo/SIBR/summary .
*Equal Contribution.
†Correspondence Author.
| 1. Introduction
Visually Rich Documents (VRDs) are ubiquitous in
daily, industrial, and commercial activities, such as receipts
of shopping, reports of physical examination, product man-
uals, and bills of entry. Visual Information Extraction (VIE)
aims to automatically extract key information from these
VRDs, which can significantly facilitate subsequent pro-
cessing and analysis. Due to its broad applications and
grand technical challenges, VIE has recently attracted con-
siderable attention from both the Computer Vision com-
munity [33, 34, 39] and the Natural Language Processing
community [16,35,37]. Typical techniques for tackling this
challenging problem include essential electronic conversion
of image (OCR) [28, 30, 41], intermediate procedure of
structure analysis [25] and high-level understanding of con-
tents [35], among which entities play an important role as
an aggregation of vision, structure, and language.
Though substantial progresses [11, 19, 35] have been
made, it is still challenging to precisely and reliably extract
key information from document images in unconstrained
conditions. As shown in Fig. 1, in real-world scenarios doc-
uments may have various formats, be captured casually with
a mobile phone, or exist occlusion or shift in printing, all of
which would pose difficulties for VIE algorithms.
To highlight the challenges in real applications and pro-
mote the development of research in VIE, we establish
a new dataset called Structurally-rich Invoices, Bills and
Receipts in the Wild ( SIBR for short), which contains 1,000
images with 71,227 annotated entity instances and 39,004
entity links. The challenges of SIBR lie in: (1) The docu-
ments are from different real-world scenarios, so their for-
mats and structures might be complicated and varying; (2)
The image quality may be very poor, i.e., blur, noise, and
uneven illumination are frequently seen; (3) The printing
process is imperfect that shift and rotation might happen.
To deal with these difficulties, we explore an novel ap-
proach for information extraction from VRDs. Different
from previous methods, which usually employ a sequen-
tial pipeline that first uses an off-the-shelf OCR engine to
detect and read textual information (location and content)
and then fuses such information with visual cues for follow-
up entity labeling ( a.k.a. entity extraction) and linking in a
multi-modal architecture (mostly a Transformer) [19, 37],
the proposed method adopts a unified framework that all
components, including text detection, text recognition, en-
tity extraction and linking, are jointly modeled and trained
in an integrated way. This means that in our method a sep-
arate OCR engine is no longer necessary. The benefits are
two-fold: (1) The accuracy of entity labeling and linking
will not be limited by the capacity of the OCR engine; (2)
The running speed of the whole pipeline could be boosted.
Drawing inspirations from general object detection [4,
15, 40, 42] and vision-language joint learning [11, 14, 31],
we put forward to model entities as semantic points ( ESP
for short). Specifically, as shown in Fig. 3, entities are rep-
resented using their center points, which are enriched with
semantics, such as geometric and linguistic information, to
perform entity labeling and linking. To better learn a joint
vision-language representation, we also devise three train-
ing tasks that are well integrated into the paradigm. The
entity-image text matching ( EITM ) task, which is only used
in the pre-training stage, learns to align entity-level vision
vectors and language vectors (encoded with off-the-shell
BERT) with a contrastive learning paradigm. Entity ex-
traction ( EE) and Entity linking ( EL), the main tasks for
VIE, are used in the pre-training, fine-tuning, and inference
stages. In these two modules, region features and position
embedding (from ground truth or detection branch) are en-
coded with transformer layers and then decoded to entity
classes and relations. Owing to the joint vision-language
representation, text recognition is no longer a necessary
module in our framework, and we will discuss the impact
of the text recognition branch in Sec. 5.4.
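To make the EITM objective above concrete, the following PyTorch-style sketch aligns entity-level vision vectors with BERT-encoded language vectors through a symmetric InfoNCE-style contrastive loss. The projection heads, embedding dimensions, and temperature are our own illustrative assumptions, not details taken from the released ESP implementation.

```python
# Illustrative sketch of an EITM-style contrastive alignment (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EITMLoss(nn.Module):
    def __init__(self, vis_dim=256, txt_dim=768, embed_dim=128, temperature=0.07):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, embed_dim)  # projects entity-level vision vectors
        self.txt_proj = nn.Linear(txt_dim, embed_dim)  # projects BERT language vectors
        self.t = temperature

    def forward(self, vis_feats, txt_feats):
        # vis_feats: (N, vis_dim) features pooled at entity center points
        # txt_feats: (N, txt_dim) BERT embeddings of the matching entity texts
        v = F.normalize(self.vis_proj(vis_feats), dim=-1)
        t = F.normalize(self.txt_proj(txt_feats), dim=-1)
        logits = v @ t.T / self.t                      # (N, N) similarity matrix
        labels = torch.arange(v.size(0), device=v.device)
        # Matched (vision, text) pairs lie on the diagonal; use a symmetric InfoNCE loss.
        return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

loss = EITMLoss()(torch.randn(8, 256), torch.randn(8, 768))  # toy usage with random features
```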
Extensive experiments have been conducted on stan-
dard benchmarks for VIE (such as FUNSD, XFUND, and CORD) as well as the proposed SIBR dataset. We found
that compared with previous state-of-the-art methods, the
proposed ESP algorithm can achieve highly competitive
performance. In particular, it shows an advantage in the task
of entity linking. Our main contributions can be summa-
rized as follows: (1) We curate and release a new dataset
for VIE, in which the document images exhibit real-world
complexity and difficulties. (2) We devise a unified frame-
work for spotting, labeling and linking entities, where a sep-
arate OCR engine is unnecessary. (3) We adopt three vision-
language joint modeling tasks for learning informative rep-
resentation for VIE. (4) Extensive experiments demonstrate
the effectiveness and advantage of our approach.
|
Yu_Block_Selection_Method_for_Using_Feature_Norm_in_Out-of-Distribution_Detection_CVPR_2023 | Abstract
Detecting out-of-distribution (OOD) inputs during the
inference stage is crucial for deploying neural networks in
the real world. Previous methods typically relied on the
highly activated feature map outputted by the network. In
this study, we revealed that the norm of the feature map
obtained from a block other than the last block can serve
as a better indicator for OOD detection. To leverage this
insight, we propose a simple framework that comprises
two metrics: FeatureNorm , which computes the norm of
the feature map, and NormRatio , which calculates the ra-
tio of FeatureNorm for ID and OOD samples to evaluate
the OOD detection performance of each block. To iden-
tify the block that provides the largest difference between
FeatureNorm of ID and FeatureNorm of OOD, we cre-
ate jigsaw puzzles as pseudo OOD from ID training sam-
ples and compute NormRatio, selecting the block with the
highest value. After identifying the suitable block, OOD
detection using FeatureNorm outperforms other methods
by reducing FPR95 by up to 52.77% on CIFAR10 bench-
mark and up to 48.53% on ImageNet benchmark. We
demonstrate that our framework can generalize to vari-
ous architectures and highlight the significance of block se-
lection, which can also improve previous OOD detection
methods. Our code is available at https://github.com/gist-
ailab/block-selection-for-OOD-detection.
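The two metrics are simple to write down. The sketch below shows one plausible realization of FeatureNorm (the norm of a block's pooled feature map) and of NormRatio computed against jigsaw-puzzle pseudo-OOD images; the choice of L2 norm, global average pooling, and a 2x2 patch shuffle are our assumptions and may differ from the released code.

```python
# Hedged sketch of FeatureNorm / NormRatio block selection (norm type, pooling, and the
# 2x2 jigsaw construction are assumptions, not necessarily the released implementation).
import torch

def feature_norm(feat):                      # feat: (B, C, H, W) output of one block
    pooled = feat.mean(dim=(2, 3))           # global average pooling -> (B, C)
    return pooled.norm(p=2, dim=1)           # per-sample FeatureNorm -> (B,)

def make_jigsaw(x):                          # shuffle 2x2 patches of ID images -> pseudo OOD
    b, c, h, w = x.shape
    p = [x[..., :h // 2, :w // 2], x[..., :h // 2, w // 2:],
         x[..., h // 2:, :w // 2], x[..., h // 2:, w // 2:]]
    o = torch.randperm(4).tolist()
    top = torch.cat([p[o[0]], p[o[1]]], dim=3)
    bottom = torch.cat([p[o[2]], p[o[3]]], dim=3)
    return torch.cat([top, bottom], dim=2)

@torch.no_grad()
def select_block(block_feat_fns, images):
    # block_feat_fns: one callable per block, each returning that block's feature map.
    jigsaw = make_jigsaw(images)
    ratios = [feature_norm(f(images)).mean() / feature_norm(f(jigsaw)).mean()
              for f in block_feat_fns]       # NormRatio per block
    return int(torch.stack(ratios).argmax()) # block with the largest ID/OOD separation
```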
| 1. Introduction
Neural networks have widely been utilized in the real
world, such as in autonomous cars [9, 21] and medical di-
agnoses [7,38]. In the real world, neural networks often en-
counter previously unseen inputs that differ from the training data. If the system fails to recognize such inputs as unknown, there can be dangerous consequences. For
example, a medical diagnosis system may recognize an un-
seen disease image as one of the known diseases. This gives
rise to the importance of out-of-distribution (OOD) detection, which enables users to operate neural network systems more safely in the real world.
Figure 1. Histogram of norm of the feature map produced by con-
volutional blocks of ResNet18. In the last block (a), the norm of ID
(black) is hard to separate from OOD (blue, orange) compared to
the one from the penultimate block (b).
In practice, various outputs of the network can be used
as an indicator to separate the in-distribution (ID) and out-
of-distribution (OOD) data. For instance, output probabil-
ity [15], calibrated output probability [28], and output en-
ergy [30] are used as indicators. The output of a neural
network is commonly calculated using a feature vector of
the feature extractor and a weight vector of the classifica-
tion layer. It is known that the norm of the feature vector
can be an indicator of input image quality [23, 37, 40] or
level of awareness [46]. Thus, we ask the following ques-
tion: Can we use the norm of the feature as an indicator to
separate ID and OOD?
In this paper, we first reveal a key observation: the last block of a neural network sometimes deteriorates owing to the overconfidence issue [10, 11]. Empir-
ically, we show that OOD images highly activate filters of
the last block (i.e., large norm; see Figure 1, left) on a net-
work trained with CIFAR10, while only weakly activating filters of
the penultimate block (i.e., small norm; see Figure 1, right).
As a result, OOD detection methods that consider overacti-
vated features [42] and overconfident outputs [28] have been
successful. However, we find that the norm of the feature
map for the OOD and ID is quite separable in the penulti-
mate block compared to the last block.
This motivates a simple and effective OOD detection
Figure 2. Illustration of our proposed out-of-distribution detection framework. FeatureNorm refers to a norm calculation for the given
feature map produced by the block. We use NormRatio of ID and pseudo OOD (i.e., jigsaw puzzles) to find which block is suitable for
OOD detection (a). During inference time, for a given input image, the OOD score is calculated by FeatureNorm on the selected block (b).
If FeatureNorm for a given input is smaller than the threshold, the given |
Zhang_Extracting_Motion_and_Appearance_via_Inter-Frame_Attention_for_Efficient_Video_CVPR_2023 | Abstract
Effectively extracting inter-frame motion and appear-
ance information is important for video frame interpolation
(VFI). Previous works either extract both types of informa-
tion in a mixed way or devise separate modules for each
type of information, which lead to representation ambiguity
and low efficiency. In this paper, we propose a new mod-
ule to explicitly extract motion and appearance information
via a unified operation. Specifically, we rethink the infor-
mation process in inter-frame attention and reuse its at-
tention map for both appearance feature enhancement and
motion information extraction. Furthermore, for efficient
VFI, our proposed module could be seamlessly integrated
into a hybrid CNN and Transformer architecture. This hy-
brid pipeline can alleviate the computational complexity
of inter-frame attention as well as preserve detailed low-
level structure information. Experimental results demon-
strate that, for both fixed- and arbitrary-timestep interpo-
lation, our method achieves state-of-the-art performance
on various datasets. Meanwhile, our approach enjoys a
lighter computation overhead over models with close per-
formance. The source code and models are available at
https://github.com/MCG-NJU/EMA-VFI .
| 1. Introduction
As a fundamental low-level vision task, the goal of
video frame interpolation (VFI) is to generate intermediate
frames given a pair of consecutive frames [17, 33]. It has
a wide range of real-life applications, such as video com-
pression [53], novel-view rending [13,47], and slow-motion
video creation [19]. In general, VFI can be seen as the pro-
cess of capturing the motion between consecutive frames
and then blending the corresponding appearance to synthe-
size the intermediate frames. From this perspective, the mo-
tion and appearance information between input frames is
essential for achieving excellent performance in VFI tasks.
*: Corresponding author ([email protected]).
Figure 1. Illustration of various approaches in video frame inter-
polation for acquiring motion and appearance information.
Concerning the extraction paradigm of motion and ap-
pearance information, the current VFI approaches can be
divided into two categories. The first is to handle both ap-
pearance and motion information in a mixed way [2,11,14,
17, 20, 21, 30, 33, 37, 38, 44], as shown in Fig. 1(a). The
two neighboring frames are directly concatenated and fed
into a backbone composed of stacked similar modules to
generate features with mixed motion and appearance infor-
mation. Though simple, this approach requires an elabo-
rate design and high capacity in the extractor module, as it
needs to deal with both motion and appearance information
jointly. The absence of explicit motion information also re-
sults in limitations for arbitrary-timestep interpolation. The
second category, as shown in Fig. 1(b), is to design sep-
arate modules for motion and appearance information ex-
traction [9, 18, 35, 40–42, 45, 56]. This approach requires
additional modules, such as cost volume [18, 40, 41], to ex-
tract motion information, which often imposes a high com-
putational overhead. Also, only extracting appearance fea-
tures from a single frame fails to capture the correspondence
of appearance information of the same regions between
frames, which is an effective cue for the VFI task [18].
To address the issues of the above two extraction
paradigms, in this paper, we propose to explicitly extract
both motion and appearance information via a unified op-
eration of inter-frame attention. With a single inter-frame
attention, as shown in Fig. 1(c), we are able to enhance
the appearance features between consecutive frames and ac-
quire motion features at the same time by reusing the atten-
tion maps. This basic processing unit could be stacked to
obtain the hierarchical motion and appearance information.
Specifically, for any patch in the current frame, we take it
as the query and its temporal neighbors as keys and values
to derive an attention map representing their temporal cor-
relation. After that, the attention map is leveraged to aggre-
gate the appearance features of neighbors to contextualize
the current region representation. In addition, the attention
map is also used to weight the displacement of neighbors
to get an approximate motion vector of the patch from the
current frame to the neighbor frame. Finally, the obtained
features are utilized with light networks for motion estima-
tion and appearance refinement to synthesize intermediate
frames. Compared with previous works, our design enjoys
three advantages. (1) The appearance features of each frame
can be enhanced with each other yet not be mixed with mo-
tion features to preserve the detailed static structure infor-
mation. (2) The obtained motion features can be scaled by
time and then used as cues to guide the generation of frames
at any moment between input frames. (3) We only need to
control the complexity and the number of modules to bal-
ance the overall performance and the inference speed.
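The double use of the attention map can be sketched as follows; this is a deliberately simplified, single-head illustration with a fully flattened neighborhood (no local windows), so the shapes and details are our assumptions rather than the released EMA-VFI code.

```python
# Simplified, single-head sketch of inter-frame attention reused for appearance and motion.
import torch

def inter_frame_attention(q_feat, n_feat, n_disp):
    # q_feat: (B, N, C) patch features of the current frame (queries)
    # n_feat: (B, M, C) patch features of the neighboring frame (keys/values)
    # n_disp: (B, N, M, 2) displacement from each query patch to each neighbor patch
    scale = q_feat.shape[-1] ** -0.5
    attn = torch.softmax(q_feat @ n_feat.transpose(1, 2) * scale, dim=-1)  # (B, N, M)
    appearance = attn @ n_feat                            # contextualized features (B, N, C)
    # The same attention map weights the displacements, giving an approximate
    # motion vector of each patch toward the neighboring frame.
    motion = torch.einsum('bnm,bnmc->bnc', attn, n_disp)  # (B, N, 2)
    return appearance, motion

# Toy usage with a 4x4 patch grid and 64-dim features.
B, N, C = 1, 16, 64
q, n = torch.randn(B, N, C), torch.randn(B, N, C)
grid = torch.stack(torch.meshgrid(torch.arange(4.0), torch.arange(4.0), indexing='ij'), -1).view(N, 2)
disp = (grid[None, None] - grid[None, :, None]).expand(B, N, N, 2)
app, mot = inter_frame_attention(q, n, disp)
```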
Directly using inter-frame attention on original reso-
lution results in huge memory usage and computational
overhead. Inspired by some recent works [8, 12, 26, 49,
54, 55, 58], which combine Convolutional Neural Net-
work (CNN) [23] with Transformer [48] to improve the
model learning ability and robustness, we adopt a sim-
ple but effective architecture: first utilize CNN to extract
high-resolution low-level features and then use Transformer
blocks equipped with inter-frame attention to extract
low-resolution motion features and inter-frame appearance
features. Our proposed module could be seamlessly inte-
grated into this hybrid pipeline to extract motion and ap-
pearance features efficiently without losing fine-grained in-
formation. Our contributions are summarized as follows:
• We propose to utilize inter-frame attention to extract
both motion and appearance information simultane-
ously for video frame interpolation.
• A hybrid CNN and Transformer design is adopted
to overcome the overhead bottleneck of the inter-
frame attention at high-resolution input while preserving fine-grained information.
• Our model achieves state-of-the-art performance on
various datasets while being efficient compared to
models with similar performance.
|
Zhang_Federated_Domain_Generalization_With_Generalization_Adjustment_CVPR_2023 | Abstract
Federated Domain Generalization (FedDG) attempts to
learn a global model in a privacy-preserving manner that
generalizes well to new clients possibly with domain shift.
Recent exploration mainly focuses on designing an unbi-
ased training strategy within each individual domain. How-
ever, without the support of multi-domain data jointly in
the mini-batch training, almost all methods cannot guar-
antee the generalization under domain shift. To overcome
this problem, we propose a novel global objective incorpo-
rating a new variance reduction regularizer to encourage
fairness. A novel FL-friendly method named Generaliza-
tion Adjustment (GA) is proposed to optimize the above ob-
jective by dynamically calibrating the aggregation weights.
The theoretical analysis of GA demonstrates the possibility
of achieving a tighter generalization bound with an explicit
re-weighted aggregation, substituting the implicit multi-
domain data sharing that is only applicable to the con-
ventional DG settings. Besides, the proposed algorithm is
generic and can be combined with any local client training-
based methods. Extensive experiments on several bench-
mark datasets have shown the effectiveness of the proposed
method, with consistent improvements over several FedDG
algorithms when used in combination. The source code
is released at https://github.com/MediaBrain-
SJTU/FedDG-GA
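One way the dynamic calibration of aggregation weights could look is sketched below: each client reports only a scalar generalization gap, and the server nudges the aggregation weights toward a gap-dependent target (here, inversely proportional to the gaps, as motivated in the introduction) with a momentum update. The exact GA update rule, step size, and normalization in the paper may differ; this is an illustration, not the released implementation.

```python
# Hedged sketch of gap-driven aggregation re-weighting in a FedAvg-style server step.
from typing import Dict, List
import torch

def update_weights(weights: List[float], gaps: List[float], momentum: float = 0.9) -> List[float]:
    eps = 1e-8
    inv = [1.0 / (g + eps) for g in gaps]                 # smaller gap -> larger target weight
    target = [v / sum(inv) for v in inv]
    new = [momentum * w + (1.0 - momentum) * t for w, t in zip(weights, target)]
    s = sum(new)
    return [w / s for w in new]                           # keep the weights on the simplex

def aggregate(client_states: List[Dict[str, torch.Tensor]], weights: List[float]):
    # Weighted average of client model parameters (the usual FedAvg aggregation).
    return {k: torch.stack([w * s[k] for w, s in zip(weights, client_states)]).sum(0)
            for k in client_states[0]}

# Per round: clients send model states plus their generalization gaps (e.g., the
# train/validation loss difference), which expose no raw data; the server then calls
# update_weights followed by aggregate.
```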
| 1. Introduction
Federated Learning (FL) has recently emerged as a
prevalent privacy-preserving paradigm for collaborative
learning on distributed data [32]. Existing studies mainly
investigate the problem of how to improve the conver-
gence and performance of the source clients’ data distribu-
tion [18, 27, 44]. A more practical problem, how to make
models trained on sites of heterogeneous distributions gen-
Figure 1. The difference between DG and FedDG is whether the domains are isolated in training (left: source domains for DG are pooled for mini-batch training; right: source domains for FedDG stay on separate clients, whose models are aggregated into a generalization model for the unseen test domain). Specifically, previous SOTA DG methods that require access to multiple domains in the mini-batch training are inapplicable to FedDG.
eralize to target clients of unknown distributions, i.e., Feder-
ated Domain Generalization (FedDG) [30], remains under-
explored. While label distribution shift has been considered
in traditional FL, FedDG focuses on the domain shift among
clients and considers each client as an individual domain.
The challenge lies in the domain shift [19] both among the
training clients and from training to testing clients.
While FedDG shares a similar goal as standard Do-
main Generalization (DG) [4, 12, 40], i.e., generalizing from
multi-source domains to unseen domains, it disallows di-
rect data sharing among clients, as shown in Figure 1,
which makes most existing DG methods hardly applica-
ble. Current methods for FedDG focus on unbiased lo-
cal training within each isolated domain. As the first at-
tempt, Liu et al. [30] propose a meta-learning framework
with Fourier-based augmentation during the local training
for better generalization. Jiang et al. [17] further pro-
pose constraining local models’ flatness on top of a simi-
lar Fourier-based normalization method. However, only fo-
cusing on an improved local training strategy cannot guar-
antee that the global model is generalizable enough to un-
seen domains. Instead, a common practice for aggregat-
ing local models into a global model is by fixed weights as
in FedAvg [32], assuming that each client constantly con-
tributes to the global model. Even the subsequent improve-
ments from the federated optimization perspective, e.g.,
FedNova [44], are mainly designed for the statistical het-
erogeneity of the same domain, not for the setting of treat-
ing each client as an individual domain. Yuan et al. [50]
have suggested that domains tend to contribute non-equally
to the global model and ignoring their differences may sig-
nificantly reduce the model’s generalizability.
As one has no clue about the distribution of unseen
domains, it is reasonable to assume that a global model with
fair performance among all clients may lead to better gener-
alization performance. We thus introduce a new fairness ob-
jective measured by the variance of the generalization gaps
among different source domains. The data privacy issue in
the FL setting has prevented direct optimization of the pro-
posed objective. We thus design a novel privacy-preserving
method named Generalization Adjustment to optimize the
objective. At the high level, GA leverages the domain flat-
ness constraint, a surrogate of the intractable domain di-
vergence constraint, to approximately explore the optimal
domain weights. Technically, we use a momentum mech-
anism to dynamically compute a weight for each isolated
domain by tracing the domain generalization gap, which is
then involved in the aggregation of FedDG to enhance the
generalization ability. Because the gap information does
not contain any domain information of each client, GA will
not cause additional risk of privacy leakage. Meanwhile,
the theoretical analysis of our method shows that a tighter
generalization bound is achieved by setting the aggregation
weights inversely proportional to the generalization gaps,
which leads to reduced variance in generalization gaps. The
contribution of our paper is summarized as follows:
• We introduce a novel optimization objective for FedDG
with a new variance reduction regularizer, which can con-
strain the fairness of the global model.
• We design an FL-friendly method named Generalization
Adjustment to tackle the aforementioned novel objective.
Our theoretical analysis has revealed that GA leads to a
tighter generalization bound for FedDG.
• Extensive experiments on a range of benchmark datasets
have shown consistent improvement when combining GA
with different federated learning algorithms. |
Yu_OSRT_Omnidirectional_Image_Super-Resolution_With_Distortion-Aware_Transformer_CVPR_2023 | Abstract
Omnidirectional images (ODIs) have obtained lots of re-
search interest for immersive experiences. Although ODIs require extremely high resolution to capture details of the entire scene, the resolutions of most ODIs are insufficient. Previous methods attempt to solve this issue by image super-resolution (SR) on equirectangular projection (ERP) images. However, they omit geometric properties of ERP in the degradation process, and their models can hardly generalize to real ERP images. In this paper, we propose Fisheye downsampling, which mimics the real-world imaging process and synthesizes more realistic low-resolution samples. Then we design a distortion-aware Transformer (OSRT) to modulate ERP distortions continuously and self-adaptively. Without a cumbersome process, OSRT outperforms previous methods by about 0.2 dB on PSNR. Moreover, we propose a convenient data augmentation strategy, which synthesizes pseudo ERP images from plain images. This simple strategy can alleviate the over-fitting problem of large networks and significantly boost the performance of ODISR. Extensive experiments have demonstrated the state-of-the-art performance of our OSRT.
| 1. Introduction
In pursuit of the realistic visual experience, omnidi-
rectional images (ODIs), also known as 360° images or panoramic images, have obtained lots of research interest in the computer vision community. In reality, we usually view ODIs with a narrow field-of-view (FOV), e.g., viewing in a headset. To capture details of the entire scene, ODIs require extremely high resolution, e.g., 4K×8K [1]. However, due to the high industrial cost of camera sensors with high precision, the resolutions of most ODIs are insufficient.
Recently, some attempts have been made to solve this
problem by image super-resolution (SR) [ 12,15,28,39,40].
*Equal contribution
†Corresponding author (e-mail: [email protected])
Figure 1. Visual comparisons of ×8 SR results on LR images (photo by Peter Leth on Flickr, with CC license) with unknown degradations. Panels: Unseen LR, LAU-Net [12] w/o Fisheye, OSRT w/o Fisheye, OSRT w/ Fisheye. Fisheye denotes that the downsampling process in the training stage is applied to Fisheye images.
As most of the ODIs are stored and transmitted in the
equirectangular projection (ERP) type, the SR process is usually performed on the ERP images. To generate high-/low-resolution training pairs, existing ODISR methods [12, 15, 28, 39, 40] directly apply uniform bicubic downsampling on the original ERP images (called ERP downsampling), which is identical to general image SR settings [24, 43]. Because they omit geometric properties of ERP in the degradation process, their models can hardly generalize to real ERP images. We can observe missing structures and blurry textures in Fig. 1. Therefore, we need a more appropriate degradation model before studying SR algorithms. In practice, ODIs are acquired by the fisheye lens and stored in ERP. Given that the low-resolution issue in real-world scenarios is caused by insufficient sensor precision and density, the downsampling process should be applied to original-formatted images before converting into other storage types. Thus, to conform with the real-world imaging process, we propose to apply uniform bicubic
downsampling on Fisheye images, which are the original
format of ODIs. The new downsampling process (called Fisheye downsampling) applies uniform bicubic downsampling on Fisheye images before converting them to ERP images. Our Fisheye downsampling is more conducive to exploring the geometric property of ODIs.
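Schematically, the two degradation settings differ only in where the bicubic downsampling is applied, as in the snippet below; fisheye_to_erp is a placeholder for a projection routine that is not specified here, so this is an assumption-laden sketch rather than the paper's actual data pipeline.

```python
# Schematic comparison of ERP vs. Fisheye downsampling (fisheye_to_erp is a placeholder).
import torch.nn.functional as F

def erp_downsampling(erp_hr, scale):
    # Conventional setting: bicubic downsampling directly on the ERP image.
    return F.interpolate(erp_hr, scale_factor=1 / scale, mode='bicubic', align_corners=False)

def fisheye_downsampling(fisheye_hr, scale, fisheye_to_erp):
    # Proposed setting: degrade the original-format (fisheye) capture first,
    # then convert the low-resolution fisheye image to ERP for training.
    fisheye_lr = F.interpolate(fisheye_hr, scale_factor=1 / scale,
                               mode='bicubic', align_corners=False)
    return fisheye_to_erp(fisheye_lr)
```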
The key issue of ODISR algorithm design is to utilize
the geometric properties of ERP images, which is also the focus of previous methods. For example, Nishiyama et al. [28] add a distortion-related condition as an additional input. LAU-Net [12] splits the whole ERP image into patches by latitude band and learns upscaling processes separately. However, the separated learning process will lead to information disconnection between adjacent patches. SphereSR [40] learns different upscaling functions on various projection types, but will inevitably introduce multiple times the computation cost. To push the performance upper bound, we propose the first Transformer for Omnidirectional image Super-Resolution (OSRT), and incorporate geometric properties in a distortion-aware manner. Specifically, to modulate distorted feature maps, we implement feature-level warping, in which offsets are learned from latitude conditions. In OSRT, we introduce two dedicated blocks to adapt
However, the increase of network capacity will also en-
large the overfitting problem of ODISR, which is rarelymentioned before. The largest ODIs dataset [ 12] contains
only 1K images, which cannot provide enough diversity fortraining Transformers. Given that acquiring ODIs requiresexpensive equipment and tedious work, we propose to gen-erate distorted ERP samples from plain images for data aug-mentation. In practice, we regard a plain image as a sampledperspective, and project it back to the ERP format. Then wecan introduce 146K additional training patches, 6 times of
the previous dataset. This simple strategy can significantlyboost the performance of ODISR (Tab. 4) and alleviate the
over-fitting problem of large networks (Fig. 9). A similar
data augmentation method is also applied in Nishiyama et
al.[28], but shows marginal improvement on small models
under ERP downsampling settings.
Our contributions are threefold. 1)F o r problem formula-
tion : To generate more realistic ERP low-resolution images,
we propose Fisheye downsampling, which mimics the real-world imaging process. 2)F o r method : Combined with the
geometric properties of ERP , we design a distortion-awareTransformer, which modulates distortions continuously and
self-adaptively without cumbersome process. 3)F o r data :
To reduce overfitting, we propose a convenient data aug-mentation strategy, which synthesizes pseudo ERP imagesfrom plain images. Extensive experiments have demon-strated the state-of-the-art performance of our OSRT
2.
|
Yang_Resource-Efficient_RGBD_Aerial_Tracking_CVPR_2023 | Abstract
Aerial robots are now able to fly in complex environ-
ments, and drone-captured data gains lots of attention in
object tracking. However, current research on aerial per-
ception has mainly focused on limited categories, such as
pedestrians or vehicles, and most scenes are captured in urban environments from a bird’s-eye view. Recently, UAVs equipped with depth cameras have also been deployed for
more complex applications, while RGBD aerial tracking
is still unexplored. Compared with traditional RGB ob-
ject tracking, adding depth information can more effec-
tively deal with more challenging scenes such as target
and background interference. To this end, in this paper,
we explore RGBD aerial tracking in an overhead space,
which can greatly advance the development of drone-based
visual perception. To boost the research, we first propose a
large-scale benchmark for RGBD aerial tracking, contain-
ing 1,000 drone-captured RGBD videos with dense annota-
tions. Then, as drone-based applications require real-
time processing with limited computational resources, we
also propose an efficient RGBD tracker named EMT. Our
tracker runs at over 100 fps on GPU, and 25 fps on the edge
platform of NVidia Jetson NX Xavier, benefiting from its ef-
ficient multimodal fusion and feature matching. Extensive
experiments show that our EMT achieves promising track-
ing performance. All resources are available at https://
github.com/yjybuaa/RGBDAerialTracking .
| 1. Introduction
Aerial robots have been widely used in complex mis-
sions. For example, Unmanned Aerial Vehicles (UAVs) equipped with cameras are able to perceive and understand unknown environments and have wide applications in agri-
culture and surveillance [11, 43]. Specifically, color-based
visual tracking with drones has been rapidly developed,
thanks to large-scale datasets [27, 43] and dedicated algo-
rithms [2–4, 9, 10, 12, 17, 24, 35]. However, these UAVs
merely equipped with color-based sensors generally fail to
deal with the challenges in complex environments, such as
background clutter and dark scenes, which exceed the visibility and illumination limits of the color-only domain. For example, current drones have difficulty tracking a person in dark scenes. Meanwhile, RGBD tracking is effective in tackling such tracking failures.
However, for a long time, depth sensors have only been incorporated with UAVs to enable aerial autonomy and collision avoidance [14]. Visual perception like RGBD tracking with drones remains unexplored due to multiple limitations. For
example, commercial RGBD sensors are strictly limited by
application scenarios and depth measurement range. On the
other hand, we notice that current UAV tracking datasets
record video sequences in the manner of aerial photogra-
phy [8, 43]. The captured objects mainly focus on pedes-
trians and vehicles, and the captured scenes are in urban
environments from a bird’s-eye view.
In this work, we explore RGBD aerial tracking from
a more practical viewpoint. Different from existing UAV
tracking works, we focus on the unexplored overhead space
(2 - 5 meters above the ground), aiming to save the ground
space greatly with drone-based visual perception. Instead
of mainly focusing on people and vehicles, our research can
include more generic objects of different categories, such as
hands, cups, or balls. Thus, multimodal aerial platforms in
this space are very important, as flying robots with short-
range perception capabilities can potentially be used in a
wider range of scenarios, such as human-robot interaction.
Notably, the new task brings challenges in drone-based
visual perception, which can be summarized as follows:
Complex real-world circumstances. The real-world
flight comes with complicated and changeable natural en-
vironments. On the one hand, the high mobility of drones
brings intense pose changes, resulting in huge variations of
target scale and considerable motion blur. In addition to the common challenges in visible conditions, drone vision also
suffers from other problems like low illumination, similar
objects and background clutter.
Limited onboard computational resources. In practi-
cal applications, flying platforms generally require higher
efficiency on edge platforms with limited resources, while
state-of-the-art trackers can only run on powerful GPUs.
Especially for multimodal trackers, the model efficiency is
always the least valued in model design.
Real-time practical applications. Real time is a basic
requirement in aerial tracking. Moving platforms require
real-time responses and real-world applications also require
trackers to run at real-time speed. However, most of
current state-of-the-art trackers cannot even achieve real-
time speed on powerful GPUs, not to mention their real-
world applications.
Therefore, to achieve UAV visual tracking with depth,
we first build a novel RGBD aerial platform to collect
videos. The platform is particularly designed to simulate
the environments in real-world applications. The captured
videos can comprehensively reflect those challenges to be
tackled. Using this aerial platform, a large-scale dataset
for Drone-based RGBD aerial tracking, named D2Cube, is
built. Some examples in our dataset are given in Fig. 1.
In total, 1,000 sequences are provided with dense bound-
ing box annotations. The settings of captured videos cover
diverse scenarios in daily life.
Furthermore, we propose an efficient tracker named
EMT to facilitate the development of on-board RGBD
tracking. The proposed EMT can be treated as a strong
baseline for on-board multimodal tracking to simultane-
ously tackle the above three issues. Thanks to the efficient mul-
timodal fusion and feature matching, our proposed tracker
can successfully balance the tight computational budget and
tracking accuracy. We perform extensive experiments in di-
verse scenarios and various platforms to validate the effec-
tiveness of our EMT. Competitive tracking performance is
observed in comparison with state-of-the-art RGB-only and
multimodal trackers, while EMT runs at a high frame
rate of over 100 FPS. Practical application tests are given
on NVIDIA Jetson NX Xavier, where our EMT can run at a
frame rate of over 25 FPS. To conclude, our dataset covers
complex aerial tracking scenarios and our method shows a
promising balance of accuracy, resources and speed.
The contributions are summarised below:
•New Problem: We propose a new task of RGBD aerial
tracking for newly defined overhead space (2m - 5m).
Unlike previous aerial tracking, this task is more rele-
vant to human life and has wider applications.
•New Benchmark: We construct a large-scale high-
diversity benchmark for RGBD aerial tracking. The
advantage is that many more categories (34 classes)
can be considered than existing aerial tracking
datasets. As far as we know, this is the first dataset
that can test multimodal aerial tracking models.•New Baseline: An efficient tracking baseline is pro-
posed for RGBD aerial tracking, which is the first real-
time tracker for efficient on-board multimodal track-
ing. It performs better than classical UA V trackers and
maintains comparable efficiency.
|
Zhang_Two-Stage_Co-Segmentation_Network_Based_on_Discriminative_Representation_for_Recovering_Human_CVPR_2023 | Abstract
Recovering 3D human mesh from videos has recently
made significant progress. However, most of the existing methods focus on the temporal consistency of videos, while ignoring the spatial representation in complex scenes, thus failing to recover a reasonable and smooth human mesh sequence under extreme illumination and chaotic backgrounds. To alleviate this problem, we propose a two-stage co-segmentation network based on discriminative representation for recovering human body meshes from videos. Specifically, the first stage of the network segments the video spatial domain to spotlight spatially fine-grained information, and then learns and enhances the intra-frame discriminative representation through a dual-excitation mechanism and a frequency domain enhancement module, while suppressing irrelevant information (e.g., background). The second stage focuses on temporal context by segmenting the video temporal domain, and models the inter-frame discriminative representation via a dynamic integration strategy. Further, to efficiently generate reasonable, discriminative human actions, we carefully elaborate a landmark anchor area loss to constrain the variation of the human motion area. Extensive experimental results on large publicly available datasets indicate superiority in comparison with most state-of-the-art methods. The code will be made public.
| 1. Introduction
3D human mesh recovery from images and videos has
been widely studied in recent years. Existing methods for estimating human pose and shape from a single image are based on parametric human models such as SMPL [17], which takes a set of model parameters as input
and finally outputs a human body mesh. These methods
capture the statistical information on human body shape and provide a human body mesh for various applications. While these methods, which recover body mesh from a single image [3, 12, 15, 16], can accurately predict human pose, they
may be jittery and intermittent when applied to videos.
The reason for this problem is that the body pose is inconsistent over successive frames and does not reflect the body’s motion in the rapidly changing complex scenes of the video. This leads to temporal non-smoothness and spatial inaccuracy. Several approaches [5, 7, 12, 14, 22] have been proposed to efficiently extend single image-based methods to video. They utilize different temporal encoders to learn the temporal representation directly from videos to better capture temporal information. However, these methods only encode spatial features, ignoring the effective utilization of spatial fine-grained features and human motion discriminative features. Therefore, they fail to recover a reasonable and smooth human sequence in chaotic and extreme illumination scenes. For example, TCMR [5] recovers unsatisfactory motion on the left arm of the actor in Figure 1 in complex scenes.
The background and the human in spatial features have
a complex relationship. When spatial features are input to the network, it is difficult for the network to distinguish between the human body and the background. At the same time, this relationship is not conducive to discovering fine-grained and discriminable features. Specifically, in extreme illumination and chaotic scenes, the messy background severely interferes with human details and movement information, thus the network cannot reason about accurate human detail features in complex scenes and lacks the ability to discriminate reasonable human movements. We consider that both intra-frame and inter-frame multi-level spatial representations are ideal cues to efficiently reason about spatial fine-grained information and temporal contextual discriminative information. In addition, learning to represent features at different stages is expected to help the model strip away the complex background and find human-separable motion features, thereby further improving human-specific discriminative capabilities.
Based on the above perspectives, we propose a two-stage
co-segmentation network based on discriminative represen-
tation for recovering human mesh from videos. In contrast
to previous approaches using common spatial features for
encoding temporal features, we attempt to segment spatial features into distinct hierarchical levels of spatial representations and process them separately in different stages. Specifically, the network learns and models intra-frame and inter-frame multi-level discriminative representations by segmenting spatial features along the feature-channel and temporal dimensions in two stages. In the first stage of the intra-frame discriminative representation, we design a dual excitation mechanism that combines self-excitation and channel-excitation mechanisms to simulate and activate human motion while attenuating the interference of complex backgrounds. In addition, we design a frequency domain enhancement module to capture motion information that can highlight motion features in the frequency domain. In the second stage of the inter-frame discriminative representation, we offer a new discriminative representation: the superposition of fragments, which enhances the spatio-temporal representation of past and future frames by a dynamic integration strategy, while modeling the discriminative representation of the temporal context. Furthermore, to ensure the integrity and plausibility of the discriminative motion representation in consecutive frames, we also carefully design a new landmark anchor area loss to optimize the network, thereby further helping the model to reconstruct accurate 3D human actions and poses.
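As a rough illustration of what a frequency domain enhancement over a temporal feature sequence could look like, the sketch below applies a learnable per-frequency gain in the Fourier domain along the time axis with a residual connection; this specific form is our assumption and is not claimed to match the module proposed in the paper.

```python
# Hedged sketch of frequency-domain enhancement over per-frame features (illustrative form).
import torch
import torch.nn as nn

class FrequencyEnhance(nn.Module):
    def __init__(self, seq_len, channels):
        super().__init__()
        n_freq = seq_len // 2 + 1                               # rfft output length
        self.gain = nn.Parameter(torch.ones(n_freq, channels))  # learnable per-frequency gain

    def forward(self, x):                                       # x: (B, T, C) per-frame features
        spec = torch.fft.rfft(x, dim=1)                         # complex spectrum over time
        spec = spec * self.gain.unsqueeze(0)                    # emphasize motion-related bands
        return torch.fft.irfft(spec, n=x.size(1), dim=1) + x    # enhanced features, residual

out = FrequencyEnhance(seq_len=16, channels=256)(torch.randn(2, 16, 256))
```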
The core contributions of our work are as follows:
• We present a co-segmentation network based on dis-
criminative representation for recovering human mesh from videos. Our method motivates and learns spatio-
temporal discriminative features at different stages.
• In Stage 1, our proposed dual excitation mechanism
and frequency domain enhancement effectively enhance human motion features and mitigate background interference. In Stage 2, we develop a dynamic integration strategy to integrate the discriminative representations of distinct stages. We also carefully design a landmark anchor area loss to constrain the generation of a reasonable pose.
• Both the quantitative and qualitative results of our
method show the effectiveness of the proposed method
on widely evaluated benchmark datasets in comparison with state-of-the-art methods.
|
Xu_MV-JAR_Masked_Voxel_Jigsaw_and_Reconstruction_for_LiDAR-Based_Self-Supervised_Pre-Training_CVPR_2023 | Abstract
This paper introduces the Masked Voxel Jigsaw and
Reconstruction (MV-JAR) method for LiDAR-based self-
supervised pre-training and a carefully designed data-
efficient 3D object detection benchmark on the Waymo
dataset. Inspired by the scene-voxel-point hierarchy in
downstream 3D object detectors, we design masking and re-
construction strategies accounting for voxel distributions in
the scene and local point distributions within the voxel. We
employ a Reversed-Furthest-Voxel-Sampling strategy to ad-
dress the uneven distribution of LiDAR points and propose
MV-JAR, which combines two techniques for modeling the
aforementioned distributions, resulting in superior perfor-
mance. Our experiments reveal limitations in previous data-
efficient experiments, which uniformly sample fine-tuning
splits with varying data proportions from each LiDAR se-
quence, leading to similar data diversity across splits. To
address this, we propose a new benchmark that samples
scene sequences for diverse fine-tuning splits, ensuring ad-
equate model convergence and providing a more accu-
rate evaluation of pre-training methods. Experiments on
our Waymo benchmark and the KITTI dataset demonstrate
that MV-JAR consistently and significantly improves 3D
detection performance across various data scales, achiev-
ing up to a 6.3% increase in mAPH compared to training
from scratch. Codes and the benchmark are available at
https://github.com/SmartBot-PJLab/MV-JAR .
| 1. Introduction
Self-supervised pre-training has gained considerable at-
tention, owing to its exceptional performance in visual rep-
resentation learning. Recent advancements in contrastive
Figure 1. 3D object detection results on the Waymo dataset (left: accelerating convergence, L2 mAPH vs. training epoch; right: data-efficient results, L2 mAPH vs. fraction of fine-tuning data; MV-JAR pre-training vs. random initialization). Our MV-JAR pre-training accelerates model convergence and greatly improves the performance with limited fine-tuning data.
learning [ 4,6,8,16,45] and masked autoencoders [ 1,7,15,
36,47] for images have sparked interest among researchers
and facilitated progress in modalities such as point clouds.
However, LiDAR point clouds differ from images and
dense point clouds obtained by reconstruction as they are
naturally sparse, unorganized, and irregularly distributed.
Developing effective self-supervised proxy tasks for these
unique properties remains an open challenge. Construct-
ing matching pairs for contrastive learning in geometry-
dominant scenes is more difficult [ 20,40], as points or re-
gions with similar geometry may be assigned as negative
samples, leading to ambiguity during training. To address
this, our study explores masked voxel modeling paradigms
for effective LiDAR-based self-supervised pre-training.
Downstream LiDAR-based 3D object detectors [ 12,19,
33,38,41,50] typically quantize the 3D space into vox-
els and encode point features within them. Unlike pix-
els, which are represented by RGB values, the 3D space
presents a scene-voxel-point hierarchy, introducing new
challenges for masked modeling. Inspired by this, we de-
sign masking and reconstruction strategies that consider
voxel distributions in the scene and local point distributions
in the voxel. Our proposed method, Masked V oxel Jigsaw
And Reconstruction (MV-JAR), harnesses the strengths of
both voxel and point distributions to improve performance.
To account for the uneven distribution of LiDAR points,
we first employ a Reversed-Furthest-V oxel-Sampling (R-
FVS) strategy that samples voxels to mask based on their
sparseness. This approach prevents masking the furthest
distributed voxels, thereby avoiding information loss in re-
gions with sparse points. To model voxel distributions,
we propose Masked V oxel Jigsaw (MVJ), which masks
the voxel coordinates while preserving the local shape of
each voxel, enabling scene reconstruction akin to solving
a jigsaw puzzle. For modeling local point distributions,
we introduce Masked V oxel Reconstruction (MVR), which
masks all coordinates of points within the voxel but retains
one point as a hint for reconstruction. Combining these two
methods enhances masked voxel modeling.
Our experiments indicate that existing data-efficient ex-
periments [ 20,40] inadequately evaluate the effectiveness
of various pre-training methods. The current benchmarks,
which uniformly sample frames from each data sequence
to create diverse fine-tuning splits, exhibit similar data di-
versity due to the proximity of neighboring frames in a
sequence [ 3,14,30]. Moreover, these experiments train
models for the same number of epochs across different
fine-tuning splits, potentially leading to incomplete conver-
gence. As a result, the benefits of pre-trained representa-
tions become indistinguishable across splits once the ob-
ject detector is sufficiently trained on the fine-tuning data.
To address these shortcomings, we propose sampling scene
sequences to form diverse fine-tuning splits and establish
a new data-efficient 3D object detection benchmark on the
Waymo [ 30] dataset, ensuring sufficient model convergence
for a more accurate evaluation.
We employ the Transformer-based SST [ 12] as our de-
tector and pre-train its backbone for downstream detec-
tion tasks. Comprehensive experiments on the Waymo
and KITTI [ 14] datasets demonstrate that our pre-training
method significantly enhances the model’s performance and
convergence speed in downstream tasks. Notably, it im-
proves detection performance by 6.3% mAPH when using
only 5% of scenes for fine-tuning and reduces training time
by half when utilizing the entire dataset (Fig. 1). With the
representation pre-trained by MV-JAR, the 3D object de-
tectors pre-trained on Waymo also exhibit generalizability
when transferred to KITTI.
|
Zhang_Learning_Debiased_Representations_via_Conditional_Attribute_Interpolation_CVPR_2023 | Abstract
An image is usually described by more than one attribute
like “shape” and “color”. When a dataset is biased, i.e.,
most samples have attributes spuriously correlated with the
target label, a Deep Neural Network (DNN) is prone to
make predictions by the “unintended” attribute, especially
if it is easier to learn. To improve the generalization abil-
ity when training on such a biased dataset, we propose a
χ2-model to learn debiased representations. First, we de-
sign a χ-shape pattern to match the training dynamics of
a DNN and find Intermediate Attribute Samples (IASs) —
samples near the attribute decision boundaries, which in-
dicate how the value of an attribute changes from one ex-
treme to another. Then we rectify the representation with a
χ-structured metric learning objective. Conditional interpo-
lation among IASs eliminates the negative effect of periph-
eral attributes and facilitates retaining the intra-class com-
pactness. Experiments show that χ2-model learns debiased
representation effectively and achieves remarkable improve-
ments on various datasets. Code is available at: https:
//github.com/ZhangYikaii/chi-square
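One plausible reading of the conditional interpolation among IASs is a mixup-style blending of representations from same-class IAS pairs that sit on opposite sides of the peripheral-attribute boundary, as sketched below; this is only an illustration of the idea, and the paper's actual formulation (and how pairs are chosen) may differ.

```python
# Heavily hedged sketch: mixup-style interpolation between same-class IAS representations.
import torch

def conditional_interpolate(feat_a, feat_b, alpha=0.5):
    # feat_a, feat_b: (B, D) features of IAS pairs that share the target class but
    # take roughly opposite values of the peripheral (bias) attribute.
    lam = torch.distributions.Beta(alpha, alpha).sample((feat_a.size(0), 1))
    return lam * feat_a + (1.0 - lam) * feat_b     # the bias attribute is averaged out

mixed = conditional_interpolate(torch.randn(16, 128), torch.randn(16, 128))
# The interpolated features would then feed a metric-learning objective that keeps
# classes compact while ignoring the peripheral attribute.
```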
| 1. Introduction
Deep neural networks (DNNs) have emerged as an epoch-
making technology in various machine learning tasks with
impressive performance [5,26]. In some real applications, an
object may possess multiple attributes, and some of them are
only spuriously correlated to the target label. For example,
in Figure 1, the intrinsic attribute of an image annotated by
“lifeboats” is its shape . Although there are many lifeboats
colored orange, a learner cannot make predictions through the color, i.e., there is a misleading correlation from the attribute, namely that an image containing the “orange” color is the target “lifeboats”. When the majority of training samples can be well discerned by such a peripheral attribute, especially when learning on it is easier than on the intrinsic one, a DNN is prone to bias towards
that “unintended” bias attribute [6, 11, 21, 43, 47, 48, 51],
like recognizing a “cyclist” wearing orange as a “lifeboat”.
Similar spurious attributes also exist in various applications
Figure 1. Classification of a standard ResNet-50 of (a) an orange lifeboat in the training set (with both color and shape attributes), and (b) an orange cyclist for the test (aligned with the color attribute but conflicting with the shape one). Most of the lifeboats in the training set are orange. The biased model is prone to predict via the “unintended” color attribute rather than the intrinsic shape.
such as recommendation systems [8, 35, 53, 59] and natural language processing [13, 14, 33, 41, 56].
Given such a biased training dataset, how can one get rid of the negative effect of the misleading correlations? One in-
tuitive solution is to perform special operations on those
samples highly correlated to the bias attributes, which re-
quires additional supervision, such as the pre-defined bias
type [1, 4, 11, 12, 22, 30, 34, 46, 50]. Since prior knowledge
of the dataset bias requires expensive manual annotations
and is naturally missing in some applications, learning a
debiased model without additional supervision about bias
is in demand. Nam et al. [36] identify samples with intrin-
sic attributes based on the observation that malignant bias
attributes are often easier-to-learn than others. Then the valu-
able samples for a debiasing scheme could be dynamically
reweighted or augmented [11,27,34]. However, the restricted
number of such samples |
Yang_Global_Vision_Transformer_Pruning_With_Hessian-Aware_Saliency_CVPR_2023 | Abstract
Transformers yield state-of-the-art results across many
tasks. However, their heuristically designed architectures impose huge computational costs during inference. This work aims to challenge the common design philosophy of
the Vision Transformer (ViT) model with uniform dimension
across all the stacked blocks in a model stage, where we
redistribute the parameters both across transformer blocks
and between different structures within the block via the
first systematic attempt on global structural pruning. Deal-
ing with diverse ViT structural components, we derive a
novel Hessian-based structural pruning criteria comparable
across all layers and structures, with latency-aware regu-
larization for direct latency reduction. Performing iterative
pruning on the DeiT-Base model leads to a new architec-
ture family called NViT (Novel ViT), with a novel parameter
redistribution that utilizes parameters more efficiently. On
ImageNet-1K, NViT-Base achieves a 2.6×FLOPs reduction,
5.1×parameter reduction, and 1.9×run-time speedup over
the DeiT-Base model in a near lossless manner. Smaller
NViT variants achieve more than 1%accuracy gain at the
same throughput of the DeiT Small/Tiny variants, as well as
a lossless 3.3×parameter reduction over the SWIN-Small
model. These results outperform prior art by a large margin.
Further analysis is provided on the parameter redistribution
insight of NViT, where we show the high prunability of ViT
models, distinct sensitivity within ViT block, and unique
parameter distribution trend across stacked ViT blocks. Our
insights provide viability for a simple yet effective parameter
redistribution rule towards more efficient ViTs for off-the-
shelf performance boost.
| 1. Introduction
Transformer models demonstrate high model capacity,
easy scalability, and superior ability in capturing long-range
dependency [1, 9, 19, 30, 38]. Vision Transformer, i.e., the
ViT [12], shows that embedding image patches into tokens
and passing them through a sequence of transformer blocks
can lead to higher accuracy compared to state-of-the-art
CNNs. DeiT [35] further presents a data-efficient training
method such that acceptable accuracy can be achieved with-
out extensive pretraining. Offering competitive performance
to CNNs under similar training regimes, transformers now
point to the appealing perspective of solving both NLP and
vision tasks with the same architecture [18, 20, 49].
Unlike CNNs built with convolutional layers that con-
tain few dimensions like the kernel size and the number
of filters, the ViT has multiple distinct components, i.e.,
QKV projection, multi-head attention, multi-layer percep-
tron, etc. [38], each defined by independent dimensions. As
a result, the dimension of each component in each ViT block
needs to be carefully designed to achieve a decent trade-off
between efficiency and accuracy. However, this is typically
not the case for state-of-the-art models. Models such as
ViT [12] and DeiT [35] mainly inherit the design heuristics
from NLP tasks, e.g., use MLP expansion ratio 4, fix QKV per head, all the blocks having the same dimensions, etc.,
which may not be optimal for computer vision [4], caus-
ing significant redundancy in the base model and a worse
efficiency-accuracy trade-off upon scaling, as extensively
shown empirically. New developments in ViT architectures
incorporate additional design tricks like multi-stage archi-
tecture [41], more complicated attention schemes [23], and
additional convolutional layers [13] etc., yet no attempt has
been made to understand the potential of redistributing
parameters within the stacked vision transformer blocks.
This work targets efficient ViTs by exploring parameter
redistribution within ViT blocks and across multiple layers of
cascading ViT blocks. To this end, we start with the straight-
forward DeiT design space, with only ViT blocks. We ana-
lyze the importance and redundancy of different components
in the DeiT model via latency-aware global structural prun-
ing, leveraging the insights to redistribute parameters for
enhanced accuracy-efficiency trade-off. Our approach, as
visualized in Fig. 1, starts from analyzing the blocks in the
computation graph of ViT to identify all the dimensions that
can be independently controlled. We apply global structural
pruning over all the components in all blocks. This offers
complete flexibility to explore their combinations towards
Figure 1. Towards efficient vision transformer models (panels: prunable components analysis of the Vision Transformer, global structural pruning with global importance ranking, parameter redistribution, and the resulting NViT). Starting from ViT, specifically DeiT, we identify the design space of pruning (i) embedding size E, (ii) number of heads H, (iii) query/key size QK, (iv) value size V and (v) MLP hidden dimension M in Sec. 3.1. Then we utilize a global ranking of latency-aware importance scores to perform iterative global structural pruning in Sec. 3.2, achieving pruned NViT models. Finally, we analyze the parameter redistribution trend of all the components in the NViT model, as in Sec. 5.1.
an optimal architecture in a complicated design space. Per-
forming global pruning on ViT is significantly challenging,
given the diverse structural components and significant mag-
nitude differences. Previous methods only attempt per-
component pruning with the same pruning ratio [5], which
cannot lead to parameter redistribution across components
and blocks. We derive a new importance score based on the
Hessian matrix norm of the loss for global structural pruning,
for the first time offering comparability among all prunable
components. Furthermore, we incorporate the estimated la-
tency reduction into the importance score. This guides the
final pruned architecture to be faster on target devices.
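As a rough illustration of the criterion described above, the sketch below scores each prunable slice (e.g., one head or one MLP column) with a squared first-order Taylor term, a common diagonal-curvature proxy, and rescales it by an estimated latency gain. It is not NViT's exact Hessian-based formula, and latency_gain is a hypothetical per-group value supplied by the caller.

import torch

def group_importance(weight: torch.Tensor, grad: torch.Tensor, dim: int) -> torch.Tensor:
    # Estimated loss increase if each slice along `dim` were removed:
    # (g * w)^2 summed over the remaining axes of the parameter tensor.
    contrib = (grad * weight) ** 2
    reduce_dims = [d for d in range(weight.dim()) if d != dim]
    return contrib.sum(dim=reduce_dims)

def latency_aware_score(importance: torch.Tensor, latency_gain: float) -> torch.Tensor:
    # Slices whose removal saves more latency are ranked lower (pruned earlier).
    return importance / max(latency_gain, 1e-8)

# All prunable slices from all blocks are placed in one global ranking, and the
# lowest-scoring ones are pruned iteratively, with fine-tuning in between.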
The iterative structural pruning of the DeiT-Base model
enables a family of efficient ViT models: NViT. On the
ImageNet-1K benchmark [33], NViT enables a nearly lossless 5.14× parameter reduction, 2.57× FLOPs reduction and 1.86× speedup on a V100 GPU over the DeiT-Base model. A 1% and 1.7% accuracy gain is observed over the DeiT-Small and DeiT-Tiny models when we scale down NViT to a similar latency. NViT achieves a further 1.8× FLOPs reduction and a 1.5× speedup over the NAS-based AutoFormer [4] (ICCV'21) and the SOTA structural pruning method S2ViTE [5] (NeurIPS'21). The efficiency and performance benefits of NViT trained on ImageNet also transfer to downstream classification and segmentation tasks.
Using structural pruning for architectural guidance, we
further make an important observation that the popular uni-
form distribution of parameters across all layers is, in fact,
not optimal. To this end, we present further empirical and theoretical analysis of the new parameter distribution rule of efficient ViT architectures, which provides a new angle on understanding the learning dynamics of vision transformer models. We believe our findings will inspire future designs of efficient ViT architectures.
Our main contributions are as follows:
•Propose NViT, a novel hardware-friendly global structural pruning algorithm enabled by a latency-aware, Hessian-based importance criterion and tailored towards the ViT architecture, achieving a nearly lossless 1.9× speedup and significantly outperforming SOTA ViT compression methods and efficient ViT designs;
•Provide a systematic analysis of the prunable compo-
nents in the ViT model. We perform structural pruning
on the embedding dimension, number of heads, MLP
hidden dimension, QK dimension and V dimension of
each head separately;
•Explore hardware-friendly parameter redistribution of ViT, finding high prunability of ViT models, distinct sensitivity within ViT blocks, and a unique parameter distribution trend across stacked ViT blocks.
|
Zhang_Complete-to-Partial_4D_Distillation_for_Self-Supervised_Point_Cloud_Sequence_Representation_Learning_CVPR_2023 | Abstract
Recent work on 4D point cloud sequences has attracted
a lot of attention. However, obtaining exhaustively labeled
4D datasets is often very expensive and laborious, so it is
especially important to investigate how to utilize raw unla-
beled data. However, most existing self-supervised point
cloud representation learning methods only consider ge-
ometry from a static snapshot omitting the fact that se-
quential observations of dynamic scenes could reveal more
comprehensive geometric details. To overcome such is-
sues, this paper proposes a new 4D self-supervised pre-
training method called Complete-to-Partial 4D Distillation.
Our key idea is to formulate 4D self-supervised represen-
tation learning as a teacher-student knowledge distilla-
tion framework and let the student learn useful 4D repre-
sentations with the guidance of the teacher. Experiments
show that this approach significantly outperforms previ-
ous pre-training approaches on a wide range of 4D point
cloud sequence understanding tasks. Code is available at:
https://github.com/dongyh20/C2P.
| 1. Introduction
Recently, there is a surge of interest in understanding
point cloud sequences in 4D (3D space + 1D time) [7, 8,
11, 21, 30]. As the direct sensor input for a wide range
of applications including robotics and augmented reality,
point cloud sequences faithfully depict a dynamic environ-
ment regarding its geometric content and object movements
in the context of the camera ego-motion. Though widely
accessible, such 4D data is prohibitively expensive to an-
notate in large scale with fine details. As a result, there
is a strong need for leveraging the colossal amount of un-
labeled sequences. Among the possible solutions, self-
supervised representation learning has shown its effective-
ness in a wide range of fields including images [2, 12, 13],
videos [6, 9, 15, 22, 28] and point clouds [16, 24, 31, 33, 34].
*Equal contribution with the order determined by rolling dice.
†Corresponding author.
Figure 1. We propose a complete-to-partial 4D distillation (C2P)
approach. Our key idea is to formulate 4D self-supervised rep-
resentation learning as a teacher-student knowledge distillation
framework in which students learn useful 4D representations un-
der the guidance of a teacher. The learned features can be trans-
ferred to a range of 4D downstream tasks.
We therefore aim to fill in the absence of self-supervised
point cloud sequence representation learning in this work.
Learning 4D representations in a self-supervised man-
ner seems to be a straightforward extension of 3D cases.
However, a second thought reveals its challenging nature
since such representations need to unify the geometry and
motion information in a synergetic manner. From the ge-
ometry aspect, a 4D representation learner needs to under-
stand 3D geometry in a dynamic context. However, most
existing self-supervised point cloud representation learn-
ing methods [16, 24, 34] only consider geometry from a
static snapshot, omitting the fact that sequential observa-
tions of dynamic scenes could reveal more comprehensive
geometric details. From the motion aspect, a 4D represen-
tation learner needs to understand motion in the 3D space,
which requires an accurate cross-time geometric associa-
tion. Nevertheless, existing video representation learning
frameworks [6, 9, 22] mostly model motion as image space
flows so geometric-aware motion cues are rarely encoded.
Due to such challenges, 4D has been rarely discussed in the
self-supervised representation learning literature, with only
a few works [4, 23] designing learning objectives in 4D to
facilitate static 3D scene understanding.
To address the above challenges, we examine the na-
ture of 4D dynamic point cloud sequences, and draw two
main observations. First, most of a point cloud sequence
depicts the same underlying 3D content with an optional
dynamic motion. Motion understanding could help aggre-
gate temporal observations to form a more comprehensive
geometric description of the scene. Second, geometric cor-
respondences across time could help estimate the relative
motion between two frames. Therefore better geometric un-
derstanding should facilitate a better motion estimation. At
the core are the synergetic nature of geometry and motion.
To facilitate the synergy of geometry and motion, we de-
velop a Complete-to-Partial 4D Distillation (C2P) method.
Our key idea is to formulate 4D self-supervised represen-
tation learning as a teacher-student knowledge distillation
framework and let the student learn useful 4D representa-
tions with the guidance of the teacher. And we present a
unified solution to the following three questions: How to
teach the student to aggregate sequential geometry for more
complete geometric understanding leveraging motion cues?
How to teach the student to predict motion based upon bet-
ter geometric understanding? How to form a stable and
high-quality teacher?
In particular, our C2P method consists of three key de-
signs. First, we design a partial-view 4D sequence genera-
tion method to convert an input point cloud sequence which
is already captured partially into an even more partial se-
quence. This is achieved by conducting view projection of
input frames following a generated camera trajectory. The
generated partial 4D sequence allows bootstrapping multi-
frame geometry completion. This is achieved by feeding
the input sequence and the generated partial-view sequence
to teacher and student networks respectively and distill the
teacher knowledge to a 4D student network. Second, the
student network not only needs to learn completion by mim-
icking the corresponding frames of the teacher network, but
also needs to predict the teacher features of other frames
within a time window, to achieve so-called 4D distillation.
Notice the teacher feature corresponds to more complete ge-
ometry, which also encourages the student to exploit the
benefit of geometry completion in motion prediction. Fi-
nally, we design an asymmetric teacher-student distillation
framework for stable training and high-quality representa-
tion, i.e., the teacher network has weaker expressivity com-
pared with the student but is easier to optimize.
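As a purely illustrative sketch of the distillation objective outlined above (not the paper's implementation), the snippet below lets a student that sees the generated partial-view sequence regress the teacher's features for the current frame and for a few future frames within a window. Frame features are mean-pooled here only to keep the example short, and the cosine-distance loss is an assumption of this sketch.

import torch
import torch.nn.functional as F

def c2p_distill_loss(student_feats, teacher_feats, window=2):
    # student_feats / teacher_feats: lists of per-frame point features of shape (N_t, C).
    T = len(student_feats)
    loss, count = 0.0, 0
    for t in range(T):
        for dt in range(min(window, T - 1 - t) + 1):
            s = F.normalize(student_feats[t].mean(dim=0), dim=-1)
            g = F.normalize(teacher_feats[t + dt].mean(dim=0), dim=-1).detach()
            loss = loss + 1.0 - torch.dot(s, g)   # cosine distance to the teacher target
            count += 1
    return loss / max(count, 1)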
We evaluate our method on three downstream tasks
including indoor and outdoor scenarios: 4D action seg-
mentation on HOI4D [21], 4D semantic segmentation on
HOI4D [21], 4D semantic segmentation on Synthia 4D [27]
and 3D action recognition on MSR-action3D [17]. We
demonstrate significant improvements over the previous
method (+2.5% accuracy on HOI4D action segmentation, +1.0% mIoU on HOI4D semantic segmentation, +1.0% mIoU on Synthia 4D semantic segmentation and +2.1% accuracy on MSR-Action3D). The contributions of this paper are fourfold: First, we
propose a new 4D self-supervised representation learning
method named Complete-to-Partial 4D Distillation which
facilitates the synergy of geometry and motion learning.
Second, we propose a natural and effective way to generate
partial-view 4D sequences and demonstrate that it can work
well as learning material for knowledge distillation. Third,
we find that asymmetric design is crucial in the complete-
to-partial knowledge distillation process and we propose a
new asymmetric distillation architecture. Fourth, extensive
experiments on three tasks show that our method outper-
forms previous state-of-the-art methods by a large margin.
|
Yu_Boost_Vision_Transformer_With_GPU-Friendly_Sparsity_and_Quantization_CVPR_2023 | Abstract
The transformer extends its success from the language to
the vision domain. Because of the stacked self-attention and
cross-attention blocks, the acceleration deployment of vi-
sion transformer on GPU hardware is challenging and also
rarely studied. This paper thoroughly designs a compres-
sion scheme to maximally utilize the GPU-friendly 2:4 fine-
grained structured sparsity and quantization. Specifically, an
original large model with dense weight parameters is first
pruned into a sparse one by 2:4 structured pruning, which
considers the GPU’s acceleration of 2:4 structured sparse
pattern with FP16 data type, then the floating-point sparse
model is further quantized into a fixed-point one by sparse-
distillation-aware quantization-aware training, considering that the GPU can provide an extra speedup for 2:4 sparse cal-
culation with integer tensors. A mixed-strategy knowledge
distillation is used during the pruning and quantization pro-
cess. The proposed compression scheme is flexible to sup-
port supervised and unsupervised learning styles. Exper-
iment results show the GPUSQ-ViT scheme achieves state-of-the-art compression by reducing vision transformer models 6.4-12.7× in model size and 30.3-62× in FLOPs with negligible accuracy degradation on ImageNet classification, COCO detection and ADE20K segmentation benchmarking tasks. Moreover, GPUSQ-ViT can boost actual deployment performance by 1.39-1.79× and 3.22-3.43× in latency and throughput on A100 GPU, and by 1.57-1.69× and 2.11-2.51× in latency and throughput on AGX Orin.
| 1. Introduction
Transformer-based neural models [48] have garnered im-
mense interest recently due to their effectiveness and gen-
eralization across various applications. Equipped with the
attention mechanism [52] as the core of its architecture,
transformer-based models specialize in handling long-range
dependencies, which are also good at extracting non-local
features [9] [5] in the computer vision domain.
*Tao Chen and Zhongxue Gan are corresponding authors.
With accuracy comparable to and even surpassing that of traditional convolutional neural networks (CNNs) [12] [49], more vision transformer models are invented and gradually replace the
CNN with state-of-the-art performance on image classifi-
cation [27] [26], object detection [70] [59], and segmenta-
tion [58] [68] tasks. Because vision transformer models generally lack the strong local visual inductive bias [9] inherent in their CNN counterparts, many transformer blocks are stacked for compensation. Moreover, the attention module
in the transformer block contains several matrix-to-matrix
calculations between key, query, and value parts [52]. Such
designs give the naive vision transformers more parameters
and higher memory and computational resource require-
ments, causing high latency and energy consumption during the inference stage. This makes actual acceleration deployment on GPU hardware challenging.
Model compression techniques to transfer the large-scale
vision transformer models to a lightweight version can
bring benefits to more efficient computation with less on-
device memory and energy consumption. There are some
previous studies to inherit CNN compression methods, in-
cluding pruning [43] [15], quantization [28] [23], distilla-
tion [61], and architecture search [6] on vision transformers.
However, there are some drawbacks in previous studies:
• Most of these common methods aim to reduce the
theoretical model size and Floating Point Operations
(FLOPs). But it has been proved [33] [37] that smaller
model sizes and FLOPs are not directly proportional to
better efficiency on deployed hardware.
• The compression patterns do not match hardware char-
acteristics. For example, pruned [43] or searched [6]
vision transformer models have the unstructured sparse
pattern in weight parameters, i.e., the distribution of
non-zero elements is random. So deployed hardware
can not provide actual speedup due to lacking the char-
acteristics support for unstructured sparsity [35].
• How to keep the accuracy to the best with multiple
compression methods and how to generalize on mul-
tiple vision tasks lack systematical investigation.
Figure 1. Comparison of computing an M×N×K GEMM on a Tensor Core. A dense matrix A of size M×K on the left side becomes M×(K/2) on the right side after compressing with the 2:4 fine-grained structured sparse pattern. The Sparse Tensor Core automatically picks only the elements from B according to the nonzero elements in A. Comparing the dense and sparse GEMMs, B and C are the same dense K×N and M×N matrices, respectively. By skipping the unnecessary multiplications of redundant zeros, the sparse GEMM accelerates the dense GEMM by 2×.
General Matrix Multiplication (GEMM) is the funda-
mental implementation inside the common parts of vision
transformers, such as convolution, linear projection, and
transformer blocks. A specific acceleration unit called
Tensor Core [39] is first introduced in the NVIDIA Volta
GPU [34] to accelerate these GEMM instructions and
further enhanced to support sparse GEMM in Ampere
GPU [35]. To make the GPU hardware efficient for sparse
GEMM, a constraint named 2:4 fine-grained structured
sparsity [33] is imposed on the allowed sparsity pattern,
i.e., two values from every four contiguous elements on
rows must be zero. Due to the 2:4 sparsity support on GPU
Tensor Core hardware, sparse GEMM can reduce memory
storage and bandwidth by almost 2× and provide 2× math
throughput compared to dense GEMM by skipping the re-
dundant zero-value computation, as shown in Figure 1. Am-
pere GPU supports various numeric precision for 2:4 spar-
sity, including FP32, FP16, INT8, and INT4, etc.
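The 2:4 pattern itself is easy to state in code. The sketch below zeroes the two smallest-magnitude values in every group of four contiguous weights along a row; real deployments rely on NVIDIA's pruning tooling and compressed storage, so this only illustrates the mask logic.

import torch

def prune_2_to_4(weight: torch.Tensor) -> torch.Tensor:
    # weight: (rows, cols) with cols divisible by 4; returns the masked weight.
    rows, cols = weight.shape
    assert cols % 4 == 0, "2:4 sparsity operates on groups of 4 along each row"
    groups = weight.reshape(rows, cols // 4, 4)
    topk = groups.abs().topk(k=2, dim=-1).indices          # two largest magnitudes per group
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, topk, True)
    return (groups * mask).reshape(rows, cols)

w_sparse = prune_2_to_4(torch.randn(8, 16))                # exactly two non-zeros per group of four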
Inspired by GPU’s acceleration characteristic for 2:4
fine-grained structured sparse pattern with various low-
precision operators, we thoroughly design the compres-
sion scheme GPUSQ-ViT by utilizing the GPU-friendly
Sparsity and Quantization to boost deployment efficacy for
Vision Transformer models, especially on GPU platforms.
GPUSQ-ViT contains two main workflows. Firstly, 2:4
sparse pruning with knowledge distillation [14] (KD) is pro-
posed to compress the specific structures in vision trans-
former architecture, e.g., transformer block, patch embed-
ding, to be GPU-friendly. Secondly, we further quantize
the sparse model through sparse-distillation-aware Quanti-
zation Aware Training [30] (QAT). To measure the influence
of quantization errors, we use the feature-based distillation
loss in the sparse pruning workflow as the weight factor.
The feature-based KD utilizes the scale factor in the quan-
tization compression workflow, which can best compensatefor the final compressed model’s accuracy. We demonstrate
thatGPUSQ-ViT can generally apply to vision transformer
models and benchmarking tasks, with state-of-the-art the-
oretical metrics on model size and FLOPs. Moreover, as
GPUSQ-ViT compresses with GPU-friendly patterns, the
compressed models can achieve state-of-the-art deployment
efficacy on GPU platforms. Our main contributions include:
• Unlike previous compression methods only aiming at
reducing theoretical metrics, we propose GPUSQ-ViT
from the perspective of GPU-friendly 2:4 sparse pat-
tern with low-precision quantization for the first time,
achieving up to 4× the GPU acceleration of prior arts.
•GPUSQ-ViT combines feature-based KD with sparse
pruning and QAT, which can best compensate for
sparse and quantized models’ accuracy.
•GPUSQ-ViT can apply to various vision transformer
models and benchmarking tasks, with proven state-of-
the-art efficacy on model size, FLOPs, and actual de-
ployment performance on multiple GPUs. Moreover,
GPUSQ-ViT can work without ground truth label an-
notations in an unsupervised learning style.
|
Yang_IDGI_A_Framework_To_Eliminate_Explanation_Noise_From_Integrated_Gradients_CVPR_2023 | Abstract
Integrated Gradients (IG) as well as its variants are well-
known techniques for interpreting the decisions of deep neu-
ral networks. While IG-based approaches attain state-of-
the-art performance, they often integrate noise into their ex-
planation saliency maps, which reduce their interpretabil-
ity. To minimize the noise, we examine the source of
the noise analytically and propose a new approach to re-
duce the explanation noise based on our analytical find-
ings. We propose the Important Direction Gradient Integra-
tion (IDGI) framework, which can be easily incorporated
into any IG-based method that uses the Reimann Integra-
tion for integrated gradient computation. Extensive exper-
iments with three IG-based methods show that IDGI im-
proves them drastically on numerous interpretability met-
rics. The source code for IDGI is available at https:
//github.com/yangruo1226/IDGI .
| 1. Introduction
With the deployment of deep neural network (DNN)
models for safety-critical applications such as autonomous
driving [5–7] and medical diagnostics [10, 24], explaining
the decisions of DNNs has become a critical concern. For
humans to trust the decision of DNNs, not only the model
must perform well on the specified task, it also must gen-
erate explanations that are easy to interpret. A series of ex-
planation methods (e.g., gradient-based saliency/attribution
map approaches [21, 22, 29, 33, 36, 38, 43, 46] as well as
many that are not based on gradients [4, 11, 13, 19, 25, 27,
30, 32, 35, 39, 40, 42, 47, 48]) have been developed to con-
nect a DNN’s prediction to its input. Among them, In-
tegrated Gradients (IG) [43], a well-known gradient-based
explanation method, and its variants [22, 46] have attracted
significant interest due to their state-of-the-art explanation
performance and desirable theoretical properties. However,
we observe that explanation noise exists in the attribution
generated by these IG methods (please see Fig. 1). In this
research, we investigate IG-based methods, study the expla-
nation noise introduced by these methods, propose a frame-
Figure 1. Saliency/attribution map of the existing IG-based meth-
ods and those with our method on explaining the prediction from
InceptionV3 model. Our method significantly reduces the noise in
the saliency map created by these IG-based methods.
work to remove the explanation noise, and empirically val-
idate the effectiveness of our approach.
A few recent IG-based methods (e.g., [38] [22], [46],
[41]) have been proposed to address the noise issue.
Kapishnikov et al. [22] provide the following main reasons¹
that could generate the noise: 1) DNN model’s shape often
has a high curvature; and 2) The choice of the reference
point impacts explanation. They propose Guided Integrated
Gradients (GIG) [22], which tackles point #1 by iteratively
finding the integration path that tries to avoid the high cur-
vature points in the space. Blur Integral Gradients [46], on
the other hand, shows that the noise could be reduced by
finding the integration path through the frequency domain
instead of the original image domain. Formally, it finds the
path by successively blurring the input via a Gaussian blur
filter. Sturmfels et al. [41] tackle point #2 by performing the
integration from multiple reference points, while Smilkov
et al. [38] aggregate the attribution with respect to multiple
Gaussian noisy inputs to reduce the noise. Nevertheless,
all IG-based methods share a common point in that they
compute the integration of gradients via the Riemann inte-
gral. We highlight that, the integration calculation by the
existing methods fundamentally introduces the explanation
noise. To this end, we offer a general solution that elimi-
nates the noise by examining the integration directions from the explanation perspective.
¹[22] mentions the accuracy of integration is also a reason to generate the noise, but this is not the focus of existing IG methods and this paper.
Specifically, we investigate each computation step in the
Riemann Integration and then theorize about the noises’ ori-
gin. Each Riemann integration calculation integrates the
gradient in the original direction: it first computes the gradient with respect to the starting point of the current path segment and then multiplies the gradient by the path segment. We show that the original direction can be divided into an important direction and a noise direction. We theoretically demonstrate that the true gradient is orthogonal to the noise direction, so the gradient's multiplication along the noise direction has no effect on the attribution.
Based on this observation, we design a framework, termed
Important Direction Gradient Integration (IDGI), that can
eliminate the explanation noise in each step of the compu-
tation in any existing IG method. Extensive investigations
reveal that IDGI reduces noise significantly when evaluated
using state-of-the-art IG-based methods.
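For reference, the sketch below shows the standard Riemann-sum form of IG together with a hypothetical variant that keeps only the component of each path segment lying along the gradient direction, mirroring the important-direction idea above. It is not IDGI's exact computation, and model is assumed to map a batch of inputs to class scores.

import torch

def ig_attribution(model, x, baseline, target, steps=64, important_only=False):
    attr = torch.zeros_like(x)
    for i in range(steps):
        start = (baseline + (i / steps) * (x - baseline)).clone().requires_grad_(True)
        step = (x - baseline) / steps                      # the "original direction" segment
        score = model(start.unsqueeze(0))[0, target]
        grad = torch.autograd.grad(score, start)[0]
        if important_only:
            g, d = grad.flatten(), step.flatten()
            # keep only the projection of the segment onto the gradient direction
            step = ((d @ g) / (g @ g + 1e-12) * g).view_as(step)
        attr = attr + grad * step
    return attr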
In summary, our main contributions are as follows:
• We propose the Important Direction Gradient Integra-
tion (IDGI), a general framework to eliminate the ex-
planation noise in IG-based methods, and investigate
its theoretical properties.
• We propose a novel measurement for assessing the at-
tribution techniques’ quality, i.e., AIC and SIC using
MS-SSIM. We show that this metric offers a more pre-
cise measurement than the original AIC and SIC.
• Our extensive evaluations on 11 image classifiers with
3 existing and 1 proposed attribution assessment tech-
niques indicate that IDGI significantly improves the at-
tribution quality over the existing IG-based methods.
|
Zhang_3D_Registration_With_Maximal_Cliques_CVPR_2023 | Abstract
As a fundamental problem in computer vision, 3D point
cloud registration (PCR) aims to seek the optimal pose to
align a point cloud pair. In this paper, we present a 3D reg-
istration method with maximal cliques (MAC). The key in-
sight is to loosen the previous maximum clique constraint,
and mine more local consensus information in a graph for
accurate pose hypotheses generation: 1) A compatibility
graph is constructed to render the affinity relationship be-
tween initial correspondences. 2) We search for maximal
cliques in the graph, each of which represents a consensus
set. We perform node-guided clique selection then, where
each node corresponds to the maximal clique with the great-
est graph weight. 3) Transformation hypotheses are com-
puted for the selected cliques by the SVD algorithm and
the best hypothesis is used to perform registration. Ex-
tensive experiments on U3M, 3DMatch, 3DLoMatch and
KITTI demonstrate that MAC effectively increases registra-
tion accuracy, outperforms various state-of-the-art meth-
ods and boosts the performance of deep-learned methods.
MAC combined with deep-learned methods achieves state-
of-the-art registration recall of 95.7% / 78.9% on 3DMatch
/ 3DLoMatch.
| 1. Introduction
Point cloud registration (PCR) is an important and fun-
damental problem in 3D computer vision and has a wide
range of applications in localization [13], 3D object detec-
tion [17] and 3D reconstruction [25]. Given two 3D scans
of the same object (or scene), the goal of PCR is to estimate
a six-degree-of-freedom (6-DoF) pose transformation that
accurately aligns the two input point clouds. Using point-
to-point feature correspondences is a popular and robust so-
lution to the PCR problem. However, due to the limita-
tions of existing 3D keypoint detectors & descriptors, the
limited overlap between point clouds and data noise, corre-
*Corresponding author.
Code will be available at https://github.com/zhangxy0517/
3D-Registration-with-Maximal-Cliques .
(a) #corr: 5182, inlier ratio: 8.97%; (b) #corr in maximal clique: 4, inlier ratio: 100%, RE=4.12°, TE=12.88cm (success); #corr in maximum clique: 6, inlier ratio: 0%, RE=12.59°, TE=63.04cm (fail).
Figure 1. Comparison of maximal and maximum cliques on a
low overlapping point cloud pair. Maximal cliques (MAC) effec-
tively choose the optimal 6-DoF transformation hypothesis with
low rotation error (RE) and translation error (TE) for two point
clouds with a low inlier ratio, while the maximum clique fails in
this case.
spondences generated by feature matching usually contain
outliers, resulting in great challenges to accurate 3D regis-
tration.
The problem of 3D registration by handling correspon-
dences with outliers has been studied for decades. We
classify them into geometric-only and deep-learned meth-
ods. For geometric-only methods [5, 6, 21, 30, 31, 38–41],
random sample consensus (RANSAC) and its variants per-
form an iterative sampling strategy for registration. Al-
though RANSAC-based methods are simple and efficient,
their performance is highly vulnerable when the outlier rate
increases, and it requires a large number of iterations to ob-
tain acceptable results. Also, a series of global registra-
tion methods based on branch-and-bound (BnB) are pro-
posed to search the 6D parameter space and obtain the op-
timal global solution. The main weakness of these meth-
ods is the high computational complexity, especially when
the correspondence set is of a large magnitude and has
an extremely high outlier rate. For deep-learned methods,
some [1–4, 9, 10, 14, 16, 18, 19, 27, 35] focus on improving
one module in the registration process, such as investigating
more discriminate keypoint feature descriptors or more ef-
fective correspondence selection techniques, while the oth-
ers [22, 29, 43] focus on registration in an end-to-end man-
ner. However, deep-learned based methods require a large
amount of data for training and usually lack generalization
on different datasets. At present, it is still very challenging
to achieve accurate registrations in the presence of heavy
outliers and in cross-dataset conditions.
In this paper, we propose a geometric-only 3D registra-
tion method based on maximal cliques (MAC). The key in-
sight is to loosen the previous maximum clique constraint,
and mine more local consensus information in a graph to
generate accurate pose hypotheses. We first model the ini-
tial correspondence set as a compatibility graph, where each
node represents a single correspondence and each edge be-
tween two nodes indicates a pair of compatible correspon-
dences. Second, we search for maximal cliques in the graph
and then use node-guided clique filtering to match each
graph node with the appropriate maximal clique containing
it. Compared with the maximum clique, MAC is a looser
constraint and is able to mine more local information in a
graph. This helps us to achieve plenty of correct hypothe-
ses from a graph. Finally, transformation hypotheses are
computed for the selected cliques by the SVD algorithm.
The best hypothesis is selected to perform registration us-
ing popular hypothesis evaluation metrics in the RANSAC
family. To summarize, our main contributions are as fol-
lows:
• We introduce a hypothesis generation method named
MAC. Our MAC method is able to mine more local
information in a graph, compared with the previous
maximum clique constraint. We demonstrate that hy-
potheses generated by MAC are of high accuracy even
in the presence of heavy outliers.
• Based on MAC, we present a novel PCR method,
which achieves state-of-the-art performance on U3M,
3DMatch, 3DLoMatch and KITTI datasets. Notably,
our geometric-only MAC method outperforms several
state-of-the-art deep learning methods [3, 9, 19, 27].
MAC can also be inserted as a module into multiple
deep-learned frameworks [1, 10, 18, 29, 43] to boost
their performance. MAC combined with GeoTrans-
former achieves the state-of-the-art registration recall
of 95.7% / 78.9% on 3DMatch / 3DLoMatch.
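A simplified sketch of the hypothesis-generation idea summarized above is given below: putative correspondences form a compatibility graph under a length-consistency check, maximal cliques are enumerated, and each node keeps its heaviest clique. The rigidity test and the clique weight are simplifications rather than the paper's exact formulation, and each selected clique would then be fed to an SVD-based pose solver.

import itertools
import networkx as nx
import numpy as np

def compatibility_graph(src, dst, thresh=0.1):
    # src, dst: (N, 3) matched points; add an edge when pairwise distances agree.
    n = len(src)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i, j in itertools.combinations(range(n), 2):
        if abs(np.linalg.norm(src[i] - src[j]) - np.linalg.norm(dst[i] - dst[j])) < thresh:
            g.add_edge(i, j)
    return g

def node_guided_cliques(g):
    # For every node, keep the maximal clique containing it with the largest weight.
    best = {}
    for clique in nx.find_cliques(g):          # enumerates maximal cliques
        weight = len(clique)                   # simplified graph weight
        for v in clique:
            if weight > best.get(v, (0, None))[0]:
                best[v] = (weight, tuple(sorted(clique)))
    return {c for _, c in best.values()}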
|
Zeng_CLIP2_Contrastive_Language-Image-Point_Pretraining_From_Real-World_Point_Cloud_Data_CVPR_2023 | Abstract
Contrastive Language-Image Pre-training, benefiting
from large-scale unlabeled text-image pairs, has demon-
strated great performance in open-world vision understand-
ing tasks. However, due to the limited Text-3D data pairs,
adapting the success of 2D Vision-Language Models (VLM)
to the 3D space remains an open problem. Existing works
that leverage VLM for 3D understanding generally resort
to constructing intermediate 2D representations for the 3D
data, but at the cost of losing 3D geometry information.
To take a step toward open-world 3D vision understand-
ing, we propose Contrastive Language- Image-Point Cloud
Pretraining (CLIP2) to directly learn the transferable 3D
point cloud representation in realistic scenarios with a
novel proxy alignment mechanism. Specifically, we exploit
naturally-existed correspondences in 2D and 3D scenarios,
and build well-aligned and instance-based text-image-point
proxies from those complex scenarios. On top of that, we
propose a cross-modal contrastive objective to learn semantic and instance-level aligned point cloud representation. Experimental results on both indoor and outdoor scenarios show that our learned 3D representation has great transfer ability in downstream tasks, including zero-shot and few-shot 3D recognition, which boosts the state-of-the-art methods by large margins. Furthermore, we provide analyses of the capability of different representations in real scenarios and present the optional ensemble scheme.
∗Equal contribution. 1Huawei Noah's Ark Lab 2Hong Kong University of Science and Technology 3The Chinese University of Hong Kong 4Sun Yat-sen University †Corresponding Author: [email protected]
| 1. Introduction
Powerful 3D point cloud representation plays a cru-
cial role in various real-world applications, e.g., 3D object
recognition and detection [10, 20, 31, 40, 44]. Compared
to 2D images, 3D point cloud provides specific informa-
tion like accurate geometry that is robust to illumination
changes. However, current methods [25, 40] that learn 3D
representations generally rely on the predefined number of
object categories and require plenty of labor-intensive an-
notations. Those learned 3D representations are insufficient
for safety-critical scenarios like self-driving which includes
a long-tail class distribution far beyond the predefined tax-
onomy. Therefore, it is highly demanded to learn a transfer-
able 3D representation equipped with zero-shot recognition
ability in vocabulary scalable real-world scenes. Figure 1
shows an open-world recognition example by our CLIP2 in
outdoor and indoor scenes, where the 3D objects can be
classified with the correlation alignment between 3D repre-
sentations and open-world vocabularies.
The critical ingredient of open-world understanding is
that the models learn sufficient knowledge to obtain general
representations. To achieve this, recent Vision-Language
Models (VLM) [14, 27, 38] leverage Internet-scale text-
image pairs to conduct vision-language pretraining, which
facilitates transferable 2D representation and demonstrates
promising performance in 2D open-vocabulary tasks. How-
ever, 3D vision-language pretraining remains unexplored
due to the limitation of existing 3D datasets in diversity and
scale compared to the massive data sources in 2D counter-
parts [14,15,27,38]. Though some recent works [12,13,43]
try to avoid this problem by transferring the pretrained 2D
VLM into the intermediate representation including pro-
jected image patches [12, 18] or depth maps [1, 43], those
representations suffer from the loss of 3D geometric infor-
mation and limited viewpoints under realistic scenarios. In particular, camera images are not always available due to sensor failures in 3D scenes. We believe the 3D
representation based on original point cloud data retains
most information and is the optimal solution for 3D real
world understanding, which requires a rethink of learning
the transferable 3D representation under realistic scenarios.
To this end, we propose a Contrastive Language- Image-
Point cloud Pretraining framework, short for CLIP2, which
directly aligns 3D space with broader raw text and advances
the 3D representation learning into an open-world era.
Our learning process can be decomposed into two stages:
Firstly, we introduce a Triplet Proxy Collection to alleviate
the limitation of accessible pretraining data by construct-
ing language-image-point triplets from real-world scenes.
Since the large-scale realistic 3D datasets for outdoor driv-
ing [2,19] and indoor scenarios [9,32] are collected in open-
world, it contains huge amounts of realistic objects that
vary in semantics and diversity. Thus we consider them
as potential pretraining data sources without extra human
supervision. Specifically, we propose “Proxy” instances as
the bridges between language descriptions, 2D images and
3D point clouds. Enabled by a well-aligned VLM, a scal-
able caption list and the geometry transformation between
2D and 3D, we automatically create more than 1 million
triplets to facilitate pretraining. Secondly, we further pro-
pose a Cross-Modal Pretraining scheme to jointly optimize
the feature space alignments of three modalities, i.e.point
cloud, language and image. It contains both the contrastive
learning objective of semantic-level text-3D correlation and
instance-level image-3D correlation, which contributes to
better transferability of learned 3D representation.
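The cross-modal objective sketched above can be illustrated with a symmetric InfoNCE loss that pulls instance-level point features toward their paired text and image embeddings. The temperature, the encoders and the exact pairing are assumptions of this sketch rather than the paper's recipe.

import torch
import torch.nn.functional as F

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    # a, b: (B, D) paired embeddings; row i of `a` matches row i of `b`.
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    targets = torch.arange(a.size(0), device=a.device)
    # symmetric cross-entropy over both matching directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

def triplet_alignment_loss(point_emb, text_emb, image_emb):
    # Semantic-level text-3D term plus instance-level image-3D term.
    return info_nce(point_emb, text_emb) + info_nce(point_emb, image_emb)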
We study the transferable capability of CLIP2 by benchmarking the zero-shot recognition performance on four popular indoor and outdoor real-world datasets, and find a significant improvement over current methods, achieving Top-1 accuracy of 61.3% on SunRGBD [32], 43.8% on ScanNet [9], 28.8% on nuScenes [2] and 56.0% on ONCE [19]. For a
fair comparison with existing methods [1, 13, 36, 43], we
conduct zero-shot and few-shot classification on the single-object dataset ScanObjectNN [34] and find consistent superiority, with a 16.1% relative improvement on zero-shot classification over the previous state-of-the-art method [13]. To vali-
date the vocabulary-increasing ability of CLIP2, we report
the quantitative results and visualizations to show the improved discovery of the long-tail categories. Moreover, we conduct ablations and analyses of different representations, and in-
vestigate ensembling alternatives to merge complementary
knowledge of all available representations in realistic appli-
cations. Our contributions can be summarized as follows:
•We propose a novel CLIP2 framework that aligns 3D
space with open-world language representation, facili-
tating zero-shot transfer in realistic scenarios.
• We present a Triplet Proxies Collection scheme in real-
world scenes, which alleviates the shortage of text-3D
data sources and facilitates the pretraining methods.
•CLIP2 jointly optimizes the correlation alignment be-
tween point cloud, language and image by proposed
cross-modal pretraining mechanism, which enhances
the transferability of learned 3D representation.
•Our CLIP2 achieves the state-of-the-art zero-shot
transfer performance on 5 datasets (indoor/outdoor
scenes and single-object) and shows quality results on
vocabulary-increasing discovery in real world.
|
Yu_Foundation_Model_Drives_Weakly_Incremental_Learning_for_Semantic_Segmentation_CVPR_2023 | Abstract
Modern incremental learning for semantic segmentation
methods usually learn new categories based on dense anno-
tations. Although achieve promising results, pixel-by-pixel
labeling is costly and time-consuming. Weakly incremen-
tal learning for semantic segmentation (WILSS) is a novel
and attractive task, which aims at learning to segment new
classes from cheap and widely available image-level labels.
Despite the comparable results, the image-level labels can
not provide details to locate each segment, which limits the
performance of WILSS. This inspires us to think how to im-
prove and effectively utilize the supervision of new classes
given image-level labels while avoiding forgetting old ones.
In this work, we propose a novel and data-efficient frame-
work for WILSS, named FMWISS. Specifically, we propose
pre-training based co-segmentation to distill the knowl-
edge of complementary foundation models for generating
dense pseudo labels. We further optimize the noisy pseudo
masks with a teacher-student architecture, where a plug-
in teacher is optimized with a proposed dense contrastive
loss. Moreover, we introduce memory-based copy-paste
augmentation to improve the catastrophic forgetting prob-
lem of old classes. Extensive experiments on Pascal VOC
and COCO datasets demonstrate the superior performance
of our framework, e.g., FMWISS achieves 70.7% and 73.3%
in the 15-5 VOC setting, outperforming the state-of-the-art
method by 3.4% and 6.1%, respectively.
| 1. Introduction
Semantic segmentation is a fundamental task in com-
puter vision and has witnessed great progress using deep
learning in the past few years. It aims at assigning each
pixel a category label. Modern supervised semantic seg-
mentation methods [12, 14] are usually based on published
large-scale segmentation datasets with pixel annotations.
Despite the promising results, one model pre-trained on one
Figure 1. Illustration of the major difference in pipeline between previous WILSS work and FMWISS. Given a model pre-trained on old classes with pixel-level labels (Yt−1), previous work [8] learns new classes (e.g., horse) via image-level labels (Ct), while FMWISS improves and effectively utilizes the supervision from complementary foundation models.
dataset is prone to easily forget learned knowledge when be-
ing retrained on another dataset with new classes. This phe-
nomenon is known as catastrophic forgetting [37], which is
caused by large changes of model parameters to model new
samples with novel categories without accessing old sam-
ples.
A promising approach to solve such catastrophic for-
getting problem is called incremental learning. Many
methods have been proposed to solve image classification
task [7, 10, 17, 25, 28, 33, 41, 44, 46, 49, 50]. Recently, a
few methods have been presented to address incremental
learning for semantic segmentation (ILSS) task, where only
new classes of training samples of the current step are la-
beled with pixel annotations and old classes of the previ-
ous step are labeled as background. Modern ILSS methods
can be classified into two categories: regularization-based
and replay-based. Regularization-based methods [9, 18, 39]
focus on distilling knowledge, e.g., output probability, in-
termedia features, from pre-trained model of previous step.
Replay-based methods [36] propose to store the information
of previous old classes or web-crawled images and replay
for new training steps. However, a key barrier to further de-
velop these methods is the requirement for pixel-level an-
notations for new classes. Very recently, WILSON [8] first
proposes a new task, weakly incremental learning for se-
mantic segmentation (WILSS), to incrementally update the
model from image-level labels for new classes. Despite the
comparable results, the image-level labels can not provide
details to accurately locate each segment, which limits the
performance and development of WILSS.
In this work, we explore to improve and more effec-
tively utilize the supervision of new classes given image-
level labels while preserving the knowledge of old ones. We
propose a Foundation Model drives Weakly Incremental
learning for Semantic Segmentation framework, dubbed
FMWISS.
Firstly, as shown in Figure 1, we are the first attempt to
leverage pre-trained foundation models to improve the su-
pervision given image-level labels for WILSS in a training-
free manner. To be specific, we propose pre-training based
co-segmentation to distill the knowledge of vision-language
pre-training models ( e.g., CLIP [42]) and self-supervised
pre-training models ( e.g., iBOT [52]), which can be com-
plementary to each other. However, it is not trivial to apply
the pre-trained models. We first adapt CLIP for category-
aware dense mask generation. Based on the initial mask
for each new class, we then propose to extract compact
category-agnostic attention maps with seeds guidance us-
ing self-supervised models. We finally refine the pseudo
masks via mask fusion. We further propose to optimize
the still noisy pseudo masks with a teacher-student archi-
tecture, where the plug-in teacher is optimized with the pro-
posed dense contrastive loss. Thus we can more effectively
utilize the pseudo dense supervision. Finally, we present
memory-based copy-paste augmentation to remedy the for-
getting problem of old classes and can further improve the
performance.
The contributions of this paper are as follows:
• We present a novel and data-efficient WILSS frame-
work, called FMWISS, which is the first attempt to uti-
lize complementary foundation models to improve and
more effectively use the supervision given only image-
level labels.
• We propose pre-training based co-segmentation to
generate dense masks by distilling both category-
aware and category-agnostic knowledge from pre-
trained foundation models, which provides dense su-
pervision against original image labels.
• To effectively utilize pseudo labels, we use a teacher-
student architecture with a proposed dense contrastive
loss to dynamically optimize the noisy pseudo labels.
• We further introduce memory-based copy-paste aug-
mentation to remedy the forgetting problem of old
classes and can also improve performance.
• Extensive experiments on Pascal VOC and COCO
datasets demonstrate the significant efficacy of our
FMWISS framework.
|
Yang_Object_Pose_Estimation_With_Statistical_Guarantees_Conformal_Keypoint_Detection_and_CVPR_2023 | Abstract
The two-stage object pose estimation paradigm first de-
tects semantic keypoints on the image and then estimates
the 6D pose by minimizing reprojection errors. Despite per-
forming well on standard benchmarks, existing techniques
offer no provable guarantees on the quality and uncertainty
of the estimation. In this paper, we inject two fundamental
changes, namely conformal keypoint detection and geomet-
ric uncertainty propagation , into the two-stage paradigm
and propose the first pose estimator that endows an estima-
tion with provable and computable worst-case error bounds .
On one hand, conformal keypoint detection applies the sta-
tistical machinery of inductive conformal prediction to con-
vert heuristic keypoint detections into circular or elliptical
prediction sets that cover the groundtruth keypoints with a
user-specified marginal probability ( e.g.,90%). Geometric
uncertainty propagation, on the other, propagates the ge-
ometric constraints on the keypoints to the 6D object pose,
leading to a Pose UnceRtainty SEt ( PURSE )that guarantees
coverage of the groundtruth pose with the same probabil-
ity. The PURSE , however, is a nonconvex set that does not
directly lead to estimated poses and uncertainties. There-
fore, we develop RANdom SAmple averaGing ( RANSAG )
to compute an average pose and apply semidefinite relax-
ation to upper bound the worst-case errors between the
average pose and the groundtruth. On the LineMOD Oc-
clusion dataset we demonstrate: (i) the PURSE covers the
groundtruth with valid probabilities; (ii) the worst-case er-
ror bounds provide correct uncertainty quantification; and
(iii) the average pose achieves better or similar accuracy as
representative methods based on sparse keypoints. | 1. Introduction
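As a generic illustration of the conformal keypoint detection summarized above, the sketch below calibrates a radius from held-out keypoint errors and turns each detection into a circular prediction set with the standard (1-alpha) marginal coverage guarantee. It uses a plain pixel-distance nonconformity score; the paper's heatmap-derived nonconformity functions are more refined than this.

import numpy as np

def calibrate_radius(pred_kpts, gt_kpts, alpha=0.1):
    # pred_kpts, gt_kpts: (N, 2) calibration detections and groundtruth keypoints.
    scores = np.sort(np.linalg.norm(pred_kpts - gt_kpts, axis=1))   # nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))        # conformal quantile index
    return float("inf") if k > n else scores[k - 1]

def prediction_set(detection, radius):
    # Circular prediction set: all pixels within `radius` of the detected keypoint.
    return {"center": detection, "radius": radius}

# At test time each detected keypoint is replaced by such a disc; the discs then
# constrain the admissible 6D poses, yielding a pose uncertainty set.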
Estimating object poses from images is a fundamental
problem in computer vision and finds extensive applications
in augmented reality [ 42], autonomous driving [ 80], robotic
manipulation [ 60], and space robotics [ 19]. One of the most
popular paradigms for object pose estimation is a two-stage
pipeline [ 20,71,72,79,81,85,89,101], where the first stage
detects (semantic) keypoints of the objects on the image,
and the second stage computes the object pose by solving
an optimization known as Perspective-n-Points (PnP) that
minimizes reprojection errors of the detected keypoints.
Safety-critical applications call for provably correct
computer vision algorithms. Existing algorithms in the two-
stage paradigm (reviewed in Section 2), however, provide
few performance guarantees on the quality of the estimated
poses, due to three challenges. (C1) It is difficult to en-
sure the detected keypoints (typically from neural networks)
are close to the groundtruth keypoints. In practice, the
first stage often outputs keypoints that are arbitrarily wrong,
known as outliers . (C2) Robust estimation is employed in
the second stage to reject outliers, leading to nonconvex op-
timizations. Fast heuristics such as RANSAC [26] are widely
adopted to find an approximate solution but they cannot
guarantee global optimality and often fail without notice.
(C3) There is no provably correct uncertainty quantification
of the estimation, notably, a formal worst-case error bound
between the estimation and the groundtruth. Though recent
work [ 98] proposed convex relaxations to certify global op-
timality of RANSAC and addressed (C2), it cannot ensure
correct estimation as the optimal pose may be far away from
the correct pose when the keypoints are unreliable.
Contributions . We propose a two-stage object pose es-
timation framework with statistical guarantees , illustrated
in Fig. 1. Given an input image, we assume a neural net-
work [ 71] is available to generate heatmap predictions of
the object keypoints (Fig. 1(a)). Our framework then pro-
ceeds in two stages, namely conformal keypoint detection
(Section 4) and geometric uncertainty propagation (Sec-
tion5). We first apply the statistical machinery of induc-
tive conformal prediction (introduced in Section 3), with
nonconformity functions inspired by the design of resid-
ual functions in classical geometric vision [ 39], to con-
formalize the heatmaps into circular or elliptical predic-
tion sets –one for each keypoint– that guarantee coverage
of the groundtruth keypoints with a user-specified marginal
probability (Fig. 1(b)). This provides a simple and general
methodology to bound the keypoint prediction errors ( i.e.,
addressing (C1)). Given the keypoint prediction sets, we
reformulate the constraints (enforced by the prediction sets)
on the keypoints as constraints on the object pose, leading to
aPose UnceRtainty SEt (PURSE ) that guarantees coverage
of the groundtruth pose with the same probability. Fig. 1(c)
plots the boundary of an example PURSE (roll, pitch, raw
angles for the rotation, and Euclidean coordinates for the
translation). The PURSE , however, is an abstract nonconvex
set that does not directly admit estimated poses and uncer-
tainty. Therefore, we develop RANdom SAmple averaGing
(RANSAG ) to compute an average pose (Fig. 1(d)) and em-
ploy semidefinite relaxations to upper bound the worst-case
rotation and translation errors between the average pose and
the groundtruth (Fig. 1(e)). This gives rise to the first kind of
computable worst-case probabilistic error bounds for object
pose estimation ( i.e., addressing (C3)). Our PURSE method-
ology has connections to the framework of unknown-but-
bounded noise estimation in control theory [ 63], with spe-
cial provisions to derive the bounds in a statistically princi-
pled way and enable efficient computation.
We test our framework on the LineMOD Occlusion ( LM-
O) dataset [ 11] to verify the correctness of the theory (Sec-
tion 6). First, we empirically show that the PURSE in-
deed contains the groundtruth pose according to the user-
specified probability. Second, we demonstrate the correct-
ness of the worst-case error bounds: when the PURSE con-
tains the groundtruth, our bounds are always larger than,
and in many cases close to, the actual errors between the av-
erage pose and the groundtruth pose. Third, we benchmark
the accuracy of the average pose (coming from RANSAG )
with representative two-stage pipelines based on sparse key-
points ( e.g., PVNet [ 72]) and show that the average pose
achieves better or similar accuracy.
Limitations . A drawback of our approach, and confor-
mal prediction in general, is that the size of the prediction
sets depends on the nonconformity function (whose designcan be an art) and may be conservative. Our experiments
suggest the bounds are loose when the keypoint prediction
sets are large ( e.g., giving 180 rotation bound). We discuss
challenges and opportunities in tightening the bounds.
|
Yan_Long-Term_Visual_Localization_With_Mobile_Sensors_CVPR_2023 | Abstract
Despite the remarkable advances in image matching and
pose estimation, image-based localization of a camera in
a temporally-varying outdoor environment is still a chal-
lenging problem due to huge appearance disparity between
query and reference images caused by illumination, seasonal
and structural changes. In this work, we propose to leverage
additional sensors on a mobile phone, mainly GPS, compass,
and gravity sensor, to solve this challenging problem. We
show that these mobile sensors provide decent initial poses
and effective constraints to reduce the searching space in
image matching and final pose estimation. With the initial
pose, we are also able to devise a direct 2D-3D matching
network to efficiently establish 2D-3D correspondences in-
stead of tedious 2D-2D matching in existing systems. As no
public dataset exists for the studied problem, we collect a
new dataset that provides a variety of mobile sensor data
and significant scene appearance variations, and develop a
system to acquire ground-truth poses for query images. We
benchmark our method as well as several state-of-the-art
baselines and demonstrate the effectiveness of the proposed
approach. Our code and dataset are available on the project
page: https://zju3dv.github.io/sensloc/ .
| 1. Introduction
Visual localization aims at estimating the camera trans-
lation and orientation for a given image relative to a known
scene. Solving this problem is crucial for many applications
such as autonomous driving [11], robot navigation [37] and
augmented and virtual reality [7, 41].
State-of-the-art approaches to visual localization typi-
cally involve matching 3D points in a pre-built map and
2D pixels in a query image [4, 22, 42, 50 –52, 67, 69]. An
intermediate image retrieval step [2, 21, 23, 46] is often ap-
plied to determine which parts of the scene are likely visible
The authors from Zhejiang University are affiliated with the State
Key Lab of CAD&CG and ZJU-SenseTime Joint Lab of 3D Vision.
†Corresponding author: Xiaowei Zhou.
Reference vs. query pairs under illumination changes, dark night, rainy weather, cross seasons, and new construction.
Figure 1. Visual localization under extremely challenging condi-
tions. The proposed benchmark dataset SensLoc exhibits long-term
appearance changes due to illumination, weather, season, day-night
alternation, and new constructions.
in the query image, in order to handle large-scale scenes.
The resulting camera poses are estimated using a standard
Perspective-n-Point (PnP) solver [24, 32] inside a robust
RANSAC [3, 12, 13, 18] loop. However, in real-world out-
door scenarios, obtaining such correspondences and further
recovering the 6-DoF pose are difficult, since the outdoor
scenes can experience large appearance variations caused
by illumination (e.g., day and night), seasonal (e.g., summer
and winter) and structure changes. Visual localization under
such challenging conditions remains an unsolved problem,
as reported by recent benchmarks [48, 53, 76].
Fortunately, nowadays, with the popularity of smart de-
vices that come equipped with various sensors such as In-
ertial Measurement Unit (IMU), gravity, compass, GPS, or
radio signals (like WiFi and Bluetooth), new possibilities
arise for mobile phone pose estimation exploiting these addi-
tional multi-modality sensors. Nevertheless, previous works
only take independent sensors into consideration. For ex-
ample, some methods utilize GPS as a prior to bound the
Visual-Inertial Odometry (VIO) drift [31, 45, 56, 75] or sim-
plify the image retrieval process [35, 72, 73], while others
focus on employing the gravity direction as a reliable prior
to benefit the PnP solver [1, 20, 33, 52, 63–65].
In this paper, we introduce a novel framework, named
SensLoc, that localizes an image by tightly coupling visual
perception with complementary mobile sensor information
for robust localization under extreme conditions. In the
first stage, our approach leverages GPS and compass to
constrain the search space for image retrieval, which not
only reduces the risk of misrepresentation of global features
but also speedups the retrieval procedure with fewer database
candidates. In the second stage, inspired by the recent CAD-
free object pose estimation method, OnePose++ [26], we
design a transformer-based network to directly match 3D
points of the retrieved sub-map to dense 2D pixels of the
query photo in a coarse-to-fine manner. Compared to the
modern visual localization pipeline, which establishes 2D-
3D correspondences by repeatedly matching 2D-2D local
features between query and retrieve images, our solution
shows a significant acceleration and a better performance
especially under challenging appearance changes. In the last
stage, we implement a simple yet effective gravity validation
algorithm and integrate it into the RANSAC loop to filter the
wrong pose hypotheses, leveraging the precise roll and pitch
angles from mobile gravity sensors. The gravity validation
leads to an improvement of RANSAC in terms of efficiency
and accuracy, as false hypotheses can be removed in advance.
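As a rough illustration of the gravity validation idea (a sketch under stated assumptions, not the paper's exact algorithm), each pose hypothesis inside RANSAC can be checked against the roll and pitch implied by the gravity sensor before any inlier counting; the world-gravity convention and the 3-degree threshold below are assumptions.

```python
import numpy as np


def gravity_consistent(R_cam_from_world, g_measured_cam,
                       g_world=np.array([0.0, 0.0, -1.0]),
                       max_angle_deg=3.0):
    """Reject a pose hypothesis whose implied 'down' direction disagrees
    with the gravity direction measured by the phone.

    R_cam_from_world: 3x3 rotation of the hypothesis;
    g_measured_cam:   gravity direction observed in the camera frame.
    """
    g_predicted = R_cam_from_world @ g_world
    cos_angle = np.dot(g_predicted, g_measured_cam) / (
        np.linalg.norm(g_predicted) * np.linalg.norm(g_measured_cam))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle <= max_angle_deg
```

Because the check costs only a handful of floating-point operations, it can be applied to every hypothesis before the comparatively expensive inlier counting, which is consistent with the efficiency gain described above.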
To the best of our knowledge, there is no public dataset
for multi-sensor localization under strong visual changes. To
facilitate the research of this area, we created a new bench-
mark dataset SensLoc , as shown in Fig. 1. Specifically, we
first used a consumer-grade panoramic camera (Insta360)
and a handheld Real-time Kinematic (RTK) recorder, to cap-
ture and reconstruct a large-scale reference map. Half a year
later, we collected query sequences with large scene appear-
ance changes through a mobile phone bounded with a RTK
and recorded all available built-in sensor data. As direct
registration between the query sequences and the reference
map is difficult, we rebuilt an auxiliary map at the same
time as acquiring the query sequences using Insta360 and
RTK and aligned the auxiliary map with the reference map
through ICP. Thus, we only needed to register the query im-
ages with the auxiliary map, which was easier as they were
captured at the same time. To achieve this, we developed
a pseudo-ground-truth (GT) generation algorithm to accu-
rately register each query sequence against the auxiliary map
by incorporating feature matching, visual-inertial-odometry,
RTK positions and gravity directions. The GT generation
algorithm does not ask for any manual intervention or extra
setup in the environment, enabling scalable pose labeling.
We evaluate several state-of-the-art image retrieval and
localization methods on our proposed dataset. We show that
the performance of the existing methods can be drastically
improved by considering sensor priors available in mobile
devices, such as GPS, compass, and gravity direction. The
experiments also demonstrate that our method outperforms
the state-of-the-art approach HLoc [50,51] by a large margin
in challenging night-time environments, while taking only
66 ms to find 2D-3D correspondences on a GPU and 8 ms
for PnP RANSAC.
In summary, our main contributions include:
•A novel outdoor visual localization framework with
multi-sensor prior for robust and accurate localization
under extreme visual changes.
•A new dataset for multi-sensor visual localization with
seasonal and illumination variations.
•Benchmarking existing methods and demonstrating the
effectiveness of the proposed approach.
|
Zhang_CLAMP_Prompt-Based_Contrastive_Learning_for_Connecting_Language_and_Animal_Pose_CVPR_2023 | Abstract
Animal pose estimation is challenging for existing
image-based methods because of limited training data
and large intra- and inter-species variances. Motivated
by the progress of visual-language research, we propose
that pre-trained language models ( e.g., CLIP) can fa-
cilitate animal pose estimation by providing rich prior
knowledge for describing animal keypoints in text. How-
ever, we found that building effective connections between
pre-trained language models and visual animal keypoints
is non-trivial since the gap between text-based descrip-
tions and keypoint-based visual features about animal pose
can be significant. To address this issue, we introduce
a novel prompt-based Contrastive learning scheme for
connecting Language and AniMal Pose (CLAMP) effec-
tively. The CLAMP attempts to bridge the gap by adapt-
ing the text prompts to the animal keypoints during net-
work training. The adaptation is decomposed into spatial-
aware and feature-aware processes, and two novel con-
trastive losses are devised correspondingly. In practice,
the CLAMP enables the first cross-modal animal pose es-
timation paradigm. Experimental results show that our
method achieves state-of-the-art performance under the su-
pervised, few-shot, and zero-shot settings, outperforming
image-based methods by a large margin. The code is avail-
able at https://github.com/xuzhang1199/CLAMP .
| 1. Introduction
Animal pose estimation aims to locate and identify a se-
ries of animal body keypoints from an input image. It plays
a key role in animal behavior understanding, zoology, and
wildlife conservation which can help study and protect an-
imals better. Although the animal pose estimation task is
analogous to human pose estimation [2] to some extent, we
argue that the two tasks are very different. For example, ani-
mal pose estimation involves multiple animal species, while
[Figure 1 diagram: a text prompt ("left front paw") is connected to animal images through the spatial-aware and feature-aware adaptation of CLAMP for effective animal pose estimation.]
Figure 1. Conceptualized visualization of our CLAMP method.
Regarding the animal pose estimation task, we proposed to exploit
rich language information from texts to facilitate the visual iden-
tification of animal keypoints. To better connect texts and animal
images, we devise the CLAMP to adapt pre-trained language mod-
els via a spatial-aware and a feature-aware process. As a result, the
CLAMP helps deliver better animal pose estimation performance.
human pose estimation only focuses on one category. Be-
sides, it is much more difficult to collect and annotate ani-
mal pose data covering different animal species, thus exist-
ing animal pose datasets are several times smaller than the
human pose datasets [20] regarding the number of samples
per species. Recently, Yu et al. [38] attempted to alleviate
this problem by presenting the largest animal pose estima-
tion dataset, i.e., AP-10K, which contains 10K images from
23 animal families and 54 species and provides the base-
line performance of SimpleBaseline [32] and HRNet [31].
Despite this progress, the volume of this dataset is still far
smaller than the popular human pose dataset, such as MS
COCO [20] with 200K images.
With diverse species and limited data, current animal
pose datasets usually have large variances in animal poses
which include both intra-species and inter-species vari-
ances. More specifically, the same animal can have diverse
poses, e.g., pandas can have poses like standing, crawling,
sitting, and lying down. Besides, the difference in the poses
of different animal species can also be significant, e.g.,
horses usually lie down to the ground, while monkeys can
be in various poses. Furthermore, even with the same pose,
different animals would have different appearances. As
an example, the joints of monkeys are wrinkled and hairy,
while those of hippos are smooth and hairless. As a result, it
could be extremely challenging for current human pose esti-
mation methods to perform well on the animal pose estima-
tion task without sufficient training data. Although image-
based pre-training methodologies can be helpful in mitigat-
ing the problem of insufficient data, the huge gap between
the pre-training datasets ( e.g. ImageNet [7] image classifi-
cation dataset and MS COCO human pose dataset [20]) and
the animal pose datasets could compromise the benefits of
pre-training procedures.
Rather than only using images to pre-train models, we
notice that the keypoints of different poses and different an-
imals share the same description in natural languages, thus
the language-based pre-trained models can be beneficial to
compensate for the shortage of animal image data. For
example, if a pre-trained language model provides a text
prompt of “a photo of the nose”, we can already use it to
identify the presence of the nose keypoint in the image with-
out involving too much training on the new dataset. For-
tunately, a recently proposed Contrastive Language-Image
Pre-training (CLIP) [28] model can provide a powerful
mapping function to pair the image with texts effectively.
Nevertheless, we found that fine-tuning the CLIP on the an-
imal pose dataset could still suffer from large gaps between
the language and the images depicting animals. In partic-
ular, the vanilla CLIP model only learns to provide a text
prompt with general language to describe the entire image,
while the animal pose estimation requires pose-specific de-
scriptions to identify several different keypoints with their
locations estimated from the same image. To this end, it is
important to adapt the pre-trained language model to the an-
imal pose dataset and effectively exploit the rich language
knowledge for animal pose estimation.
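As a small illustration of this idea, keypoint names can be turned into prompts and embedded with a pre-trained language-image model; the snippet below assumes the openly released CLIP package, and the keypoint list and prompt template are illustrative rather than the exact ones used in this work.

```python
import torch
import clip  # OpenAI's CLIP package, assumed to be installed

# Illustrative keypoint names; the paper's prompt design may differ.
KEYPOINTS = ["nose", "left front paw", "right front paw", "tail root"]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

prompts = [f"a photo of the {name}" for name in KEYPOINTS]
tokens = clip.tokenize(prompts).to(device)
with torch.no_grad():
    text_emb = model.encode_text(tokens)              # (K, D) prompt embeddings
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
# Comparing these K embeddings with image features scores the presence of
# each keypoint without pose-specific training, as argued above.
```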
To address the above issue, we propose a novel prompt-
based contrastive learning scheme for effectively connect-
ing language and animal pose (called CLAMP), enabling
the first cross-modal animal pose estimation paradigm. In
particular, we design pose-specific text prompts to describe
different animal keypoints, which will be further embed-
ded using the language model with rich prior knowledge.
By adapting the pose-specific text prompts to visual animal
keypoints, we can effectively utilize the knowledge from the
pre-trained language model for the challenging animal pose
estimation. However, there is a significant gap between the
pre-trained CLIP model (which generally depicts the entire image) and the animal pose task (which requires the specific
keypoint feature discriminative to and aligned with given
text descriptions). To this end, we decompose the compli-
cated adaptation into a spatial and feature-aware process.
Specifically, we devise a spatial-level contrastive loss to
help establish spatial connections between text prompts and
the image features. A feature-level contrastive loss is also
devised to make the visual features and embedded prompts
of different keypoints more discriminative to each other and
align their semantics in a compatible multi-modal embed-
ding space. With the help of the decomposed adaptation, ef-
fective connections between the pre-trained language model
and visual animal poses are established. Such connections
with rich prior language knowledge can help deliver better
animal pose prediction.
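The following is a generic sketch of what a spatial-level text-image contrastive objective of this kind can look like; it is not the exact loss proposed here, and the temperature value and the use of ground-truth keypoint heatmaps as spatial targets are assumptions.

```python
import torch
import torch.nn.functional as F


def spatial_contrastive_loss(img_feats, text_emb, kpt_heatmaps, tau=0.07):
    """Contrast each embedded keypoint prompt against spatial positions.

    img_feats:    (B, D, H, W) per-pixel visual features
    text_emb:     (K, D)       embedded keypoint prompts
    kpt_heatmaps: (B, K, H, W) ground-truth heatmaps, each summing to 1
    """
    B, D, H, W = img_feats.shape
    img = F.normalize(img_feats.flatten(2), dim=1)        # (B, D, H*W)
    txt = F.normalize(text_emb, dim=-1)                   # (K, D)
    sim = torch.einsum("kd,bdn->bkn", txt, img) / tau     # (B, K, H*W)
    log_p = F.log_softmax(sim, dim=-1)                    # softmax over positions
    target = kpt_heatmaps.flatten(2)                      # (B, K, H*W)
    return -(target * log_p).sum(dim=-1).mean()           # cross-entropy vs heatmap
```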
In summary, the contribution of this paper is threefold:
• We propose a novel cross-modal animal pose estima-
tion paradigm named CLAMP to effectively exploit
prior language knowledge from the pre-trained lan-
guage model for better animal pose estimation.
• We propose to decompose the cross-modal adaptation
into a spatial-aware process and a feature-aware pro-
cess with carefully designed losses, which could effec-
tively align the language and visual features.
• Experiments on two challenging datasets in three set-
tings, i.e., 1) AP-10K [38] dataset (supervised learn-
ing, few-shot learning, and zero-shot learning) and 2)
Animal-Pose [4] dataset (supervised learning), vali-
date the effectiveness of the CLAMP method.
|
Yao_Local_Implicit_Normalizing_Flow_for_Arbitrary-Scale_Image_Super-Resolution_CVPR_2023 | Abstract
Flow-based methods have demonstrated promising re-
sults in addressing the ill-posed nature of super-resolution
(SR) by learning the distribution of high-resolution (HR)
images with the normalizing flow. However, these methods
can only perform a predefined fixed-scale SR, limiting their
potential in real-world applications. Meanwhile, arbitrary-
scale SR has gained more attention and achieved great
progress. Nonetheless, previous arbitrary-scale SR meth-
ods ignore the ill-posed problem and train the model with
per-pixel L1 loss, leading to blurry SR outputs. In this work,
we propose “Local Implicit Normalizing Flow” (LINF) as
a unified solution to the above problems. LINF models the
distribution of texture details under different scaling fac-
tors with normalizing flow. Thus, LINF can generate photo-
realistic HR images with rich texture details in arbitrary
scale factors. We evaluate LINF with extensive experiments
and show that LINF achieves the state-of-the-art perceptual
quality compared with prior arbitrary-scale SR methods.
| 1. Introduction
Arbitrary-scale image super-resolution (SR) has gained
increasing attention recently due to its tremendous appli-
cation potential. However, this field of study suffers from
two major challenges. First, SR aims to reconstruct high-
resolution (HR) image from a low-resolution (LR) counter-
part by recovering the missing high-frequency information.
This process is inherently ill-posed since the same LR im-
age can yield many plausible HR solutions. Second, prior
deep learning based SR approaches typically apply upsam-
pling with a pre-defined scale in their network architectures,
such as squeeze layer [ 1], transposed convolution [ 2], and
sub-pixel convolution [ 3]. Once the upsampling scale is de-
termined, they are unable to further adjust the output res-
olutions without modifying their model architecture. This
causes inflexibility in real-world applications. As a result,
Figure 1. A comparison of the previous arbitrary-scale SR ap-
proaches and LINF. LINF models the distribution of texture details
in HR images at arbitrary scales. Therefore, unlike the prior meth-
ods that tend to produce blurry images, LINF is able to generate
arbitrary-scale HR images with rich and photo-realistic textures.
discovering a way to perform arbitrary-scale SR and pro-
duce photo-realistic HR images from an LR image with a
single model has become a crucial research direction.
A natural approach to addressing the one-to-many in-
verse problem in SR is to consider the solution as a dis-
tribution. Consequently, a number of generative-based SR
methods [ 1,4–8] have been proposed to tackle this ill-
posed problem. Among them, flow-based SR methods
show promise, as normalizing flow [ 9–12] offers several
advantages over other generative models. For instance,
flow does not suffer from the training instability and mode
collapse issues present in generative adversarial networks
(GANs) [ 13]. Moreover, flow-based methods are compu-
tationally efficient compared to diffusion [ 14] and autore-
gressive (AR) [ 15,16] models. Representative flow-based
models, such as SRFlow [ 1] and HCFlow [ 7], are able to
generate high-quality SR images and achieve state-of-the-
art results on the benchmarks. However, these methods are
restricted to fixed-scale SR, limiting their applicability.
Another line of research focuses on arbitrary-scale SR.
LIIF [ 17] employs local implicit neural representation to
represent images in a continuous domain. It achieves
arbitrary-scale SR by replacing fixed-scale upsample mod-
ules with an MLP to query the pixel value at any coordi-
nate. LTE [ 18] further estimates the Fourier information at
a given coordinate to make MLP focus on learning high-
frequency details. However, these works did not explicitly
account for the ill-posed nature of SR. They adopt a per-
pixel L1loss to train the model in a regression fashion. The
reconstruction error favors the averaged output of all possi-
ble HR images, leading the model to generate blurry results.
Based on the observation above, combining flow-based
SR model with the local implicit module is a promising di-
rection in which flow can account for the ill-posed nature
of SR, and the local implicit module can serve as a solu-
tion to the arbitrary-scale challenge. Recently, LAR-SR [ 8]
claimed that details in natural images are locally correlated
without long-range dependency. Inspired by this insight, we
formulated SR as a problem of learning the distribution of
local texture patch. With the learned distribution, we per-
form super-resolution by generating the local texture sepa-
rately for each non-overlapping patch in the HR image.
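A minimal sketch of this per-patch generation scheme is given below; the `flow.sample` interface, the patch size, and the sampling temperature are illustrative placeholders rather than the actual LINF implementation.

```python
import torch


def generate_hr(lr_img, flow, scale, patch=3, temperature=0.5):
    """Assemble an HR image by sampling each non-overlapping local patch.

    `flow` is a hypothetical conditional model exposing
    flow.sample(lr_img, center_coord, scale, temperature) -> (C, patch, patch).
    """
    C, h, w = lr_img.shape
    H, W = int(h * scale), int(w * scale)
    hr = torch.zeros(C, H, W)
    for top in range(0, H, patch):
        for left in range(0, W, patch):
            # Normalized center coordinate of this HR patch.
            center = ((top + patch / 2) / H, (left + patch / 2) / W)
            tex = flow.sample(lr_img, center, scale, temperature)
            hr[:, top:top + patch, left:left + patch] = tex[:, :H - top, :W - left]
    return hr
```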
With the new problem formulation, we present Local Im-
plicit Normalizing Flow (LINF) as the solution. Specifi-
cally, a coordinate conditional normalizing flow model sthe
local texture patch distribution, which is conditioned on the
LR image, the central coordinate of local patch, and the
scaling factor. To provide the conditional signal for the
flow model, we use the local implicit module to estimate
Fourier information at each local patch. LINF excels the
previous flow-based SR methods with the capability to up-
scale images with arbitrary scale factors. Different from
prior arbitrary-scale SR methods, LINF explicitly addresses
the ill-posed issue by learning the distribution of local tex-
ture patch. As shown in Fig 1, hence, LINF can generate
HR images with rich and reasonable details instead of the
over-smoothed ones. Furthermore, LINF can address the is-
sue of unpleasant generative artifacts, a common drawback
of generative models, by controlling the sampling tempera-
ture. Specifically, the sampling temperature in normalizing
flow controls the trade-off between PSNR (fidelity-oriented
metric) and LPIPS [ 19] (perceptual-oriented metric). The
contributions of this work can be summarized as follows:
•We proposed a novel LINF framework that leverages
the advantages of a local implicit module and normal-
izing flow. To the best of our knowledge, LINF is the
first framework that employs normalizing flow to gen-
erate photo-realistic HR images at arbitrary scales.
•We validate the effectiveness of LINF to serve as a uni-
fied solution for the ill-posed and arbitrary-scale chal-
lenges in SR via quantitative and qualitative evidences.
•We examine the trade-offs between the fidelity- and
perceptual-oriented metrics, and show that LINF does
yield a better trade-off than the prior SR approaches. |
Zhang_Lite-Mono_A_Lightweight_CNN_and_Transformer_Architecture_for_Self-Supervised_Monocular_CVPR_2023 | Abstract
Self-supervised monocular depth estimation that does
not require ground truth for training has attracted attention
in recent years. It is of high interest to design lightweight
but effective models so that they can be deployed on edge
devices. Many existing architectures benefit from using
heavier backbones at the expense of model sizes. This paper
achieves comparable results with a lightweight architecture.
Specifically, the efficient combination of CNNs and Trans-
formers is investigated, and a hybrid architecture called
Lite-Mono is presented. A Consecutive Dilated Convolu-
tions (CDC) module and a Local-Global Features Interac-
tion (LGFI) module are proposed. The former is used to
extract rich multi-scale local features, and the latter takes
advantage of the self-attention mechanism to encode long-
range global information into the features. Experiments
demonstrate that Lite-Mono outperforms Monodepth2 by
a large margin in accuracy, with about 80% fewer train-
able parameters. Our codes and models are available at
https://github.com/noahzn/Lite-Mono .
| 1. Introduction
Many applications in the field of robotics, autonomous
driving, and augmented reality rely on depth maps, which
represent the 3D geometry of a scene. Since depth sen-
sors increase costs, research on inferring depth maps us-
ing Convolutional Neural Networks (CNNs) from images
emerged. With the annotated depth one can train a regres-
sion CNN to predict the depth value of each pixel on a sin-
gle image [ 10,11,22]. Lacking large-scale accurate dense
ground-truth depth for supervised learning, self-supervised
methods that seek supervisory signals from stereo-pairs of
frames or monocular videos are favorable and have made
great progress in recent years. These methods regard the
depth estimation task as a novel view synthesis problem
and minimize an image reconstruction loss [ 5,14,15,41,45].
[Figure 1 panels: Input, Monodepth2, R-MSFM6, Lite-Mono (Ours).]
Figure 1. The proposed Lite-Mono has fewer parameters than
Monodepth2 [ 15] and R-MSFM [ 46], but generates more accu-
rate depth maps.
The camera motion is known when using stereo-pairs of im-
ages, so a single depth estimation network is adopted to pre-
dict depth. But if only using monocular videos for training
an additional pose network is needed to estimate the motion
of the camera. Despite this, self-supervised methods that
only require monocular videos are preferred, as collecting
stereo data needs complicated configurations and data pro-
cessing. Therefore, this paper also focuses on monocular
video training.
In addition to increasing the accuracy of monocular
training by introducing improved loss functions [ 15] and
semantic information [ 5,21] to mitigate the occlusion and
moving objects problems, many works focused on design-
ing more effective CNN architectures [ 17,33,39,41,46].
However, the convolution operation in CNNs has a local
receptive field, which cannot capture long-range global in-
formation. To achieve better results a CNN-based model
can use a deeper backbone or a more complicated archi-
tecture [ 15,28,44], which also results in a larger model
size. The recently introduced Vision Transformer (ViT) [ 8]
is able to model global contexts, and some recent works ap-
ply it to monocular depth estimation architectures [ 3,35] to
obtain better results. However, the expensive calculation of
the Multi-Head Self-Attention (MHSA) module in a Trans-
former hinders the design of lightweight and fast inference
models, compared with CNN models [ 35].
This paper pursues a lightweight and efficient self-
supervised monocular depth estimation model with a hy-
brid CNN and Transformer architecture. In each stage of
the proposed encoder a Consecutive Dilated Convolutions
(CDC) module is adopted to capture enhanced multi-scale
local features. Then, a Local-Global Features Interaction
(LGFI) module is used to calculate the MHSA and encode
global contexts into the features. To reduce the computa-
tional complexity the cross-covariance attention [ 1] is cal-
culated in the channel dimension instead of the spatial di-
mension. The contributions of this paper can be summa-
rized in three aspects.
•A new lightweight architecture, dubbed Lite-Mono,
for self-supervised monocular depth estimation, is pro-
posed. Its effectiveness with regard to the model size
and FLOPs is demonstrated.
•The proposed architecture shows superior accuracy
on the KITTI [ 13] dataset compared with competitive
larger models. It achieves state-of-the-art accuracy with the fewest
trainable parameters. The model’s generalization abil-
ity is further validated on the Make3D [ 32] dataset.
Additional ablation experiments are conducted to ver-
ify the effectiveness of different design choices.
•The inference time of the proposed method is tested on
an NVIDIA TITAN Xp and a Jetson Xavier platform,
which demonstrates its good trade-off between model
complexity and inference speed.
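To make the two encoder modules described above concrete, here is a simplified PyTorch sketch; the dilation rates, the activation choice, and the single-head channel attention without a learnable temperature are assumptions that deviate from the exact Lite-Mono design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CDCBlock(nn.Module):
    """Consecutive dilated 3x3 convolutions for multi-scale local features."""
    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
             for d in dilations])

    def forward(self, x):
        for conv in self.convs:
            x = x + F.gelu(conv(x))          # residual per dilated convolution
        return x


class ChannelAttention(nn.Module):
    """Simplified cross-covariance (channel-wise) attention used in LGFI.

    The attention matrix is C x C, so its cost grows with the number of
    channels rather than with the number of spatial tokens.
    """
    def __init__(self, channels):
        super().__init__()
        self.qkv = nn.Linear(channels, channels * 3)
        self.proj = nn.Linear(channels, channels)

    def forward(self, x):                    # x: (B, N, C) flattened tokens
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = F.normalize(q, dim=1)            # normalize along the token axis
        k = F.normalize(k, dim=1)
        attn = (q.transpose(1, 2) @ k).softmax(dim=-1)           # (B, C, C)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)         # (B, N, C)
        return self.proj(out)
```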
The remainder of the paper is organized as follows. Sec-
tion2reviews some related research work. Section 3illus-
trates the proposed method in detail. Section 4elaborates
on the experimental results and discussion. Section 5con-
cludes the paper.
|
Yan_NeRF-DS_Neural_Radiance_Fields_for_Dynamic_Specular_Objects_CVPR_2023 | Abstract
Dynamic Neural Radiance Field (NeRF) is a powerful
algorithm capable of rendering photo-realistic novel view
images from a monocular RGB video of a dynamic scene.
Although it warps moving points across frames from the
observation spaces to a common canonical space for ren-
dering, dynamic NeRF does not model the change of the
reflected color during the warping. As a result, this ap-
proach often fails drastically on challenging specular ob-
jects in motion. We address this limitation by reformulat-
ing the neural radiance field function to be conditioned on
surface position and orientation in the observation space.
This allows the specular surface at different poses to keep
the different reflected colors when mapped to the common
canonical space. Additionally, we add the mask of moving
objects to guide the deformation field. As the specular sur-
face changes color during motion, the mask mitigates the
problem of failure to find temporal correspondences with
only RGB supervision. We evaluate our model based on the
novel view synthesis quality with a self-collected dataset of
different moving specular objects in realistic environments.
The experimental results demonstrate that our method sig-
nificantly improves the reconstruction quality of moving
specular objects from monocular RGB videos compared to
the existing NeRF models. Our code and data are available
at the project website1.
| 1. Introduction
Neural Radiance Fields (NeRF) [25] trained with multi-
view images can synthesize novel views for 3D scenes with
photo-realistic quality. NeRF predicts the volume density
and view dependent color of the sampled spatial points
in the scene with a multi-layer perceptron (MLP). Recent
works such as Nerfies [32] and NSFF [22] extend NeRF to
reconstruct dynamic scenes from monocular videos. They
resolve the lack of multi-view image supervision in dy-
namic scenes using a deformation field, which warps dif-
1https://github.com/JokerYan/NeRF-DS
[Figure 1 panels: HyperNeRF (left), NeRF-DS (ours, right).]
Figure 1. Comparison of novel views rendered by HyperN-
eRF [33] (left) and our NeRF-DS (right), on the “americano” scene
in the HyperNeRF dataset [33]2(top) and the “basin” scene in our
dynamic specular dataset (bottom). Our NeRF-DS model signifi-
cantly improves the reconstruction quality by a surface-aware dy-
namic NeRF and a mask guided deformation field.
ferent observation spaces to a common canonical space.
Despite showing promising results, we find that the ex-
isting dynamic NeRFs do not consider specular reflections
during warping and often fail drastically on challenging dy-
namic specular objects as shown in Fig. 1. The quality
of dynamic specular object reconstruction is important be-
cause specular (e.g. metallic, plastic) surfaces are common
in our daily environment and furthermore it indicates how
accurate a dynamic NeRF represents the radiance field un-
der motion or deformation. Previous works such as Ref-
NeRF [50] and NeRV [45] have only focused on improving
the specular reconstruction in static scenes. The problem of
reconstructing dynamic specular objects with NeRF remain
largely unexplored.
We postulate that one of the reasons for dynamic models
to fail on moving specular objects is because they do not
2The rendered frames come from the first 3 seconds of the “americano”
scene when the cup is rotating. This part of the video is not included in the
HyperNeRF [33] qualitative results.
consider the original surface information when rendering in
a common canonical space. As suggested in rendering mod-
els such as Phong shading [34], the specular color depends
on the relative position and orientation of the surface with
respect to the reflected environment. Nonetheless, existing
dynamic NeRFs often ignore the original position and ori-
entation of the surface when warping a specular object to
a common canonical space for rendering. As the result, a
point on a specular object reflecting different colors at dif-
ferent positions and orientations can cause conflicts when
warped to a common canonical space. Additionally, the key
of existing dynamic models is to learn a deformation field
for each frame such that correspondences can be established
in a shared canonical space. However, the color of specular
objects can vary significantly at different locations and ori-
entations, which makes it hard to establish correspondences
with the RGB supervision alone. These two limitations in-
evitably lead to the failure of existing dynamic models when
applied to specular objects.
In this paper, we introduce NeRF-DS (Fig. 2) which
models dynamic specular objects using a surface-aware dy-
namic NeRF and a mask guided deformation field to miti-
gate the two limitations mentioned above. 1) Our NeRF-DS
still warps the points from the observation space to a com-
mon canonical space and predicts their volume density. In
contrast to other dynamic NeRFs, the color of each point is
additionally conditioned on the spatial coordinate and sur-
face normal in the observation space before warping. Cor-
responding points from different frames can share the same
geometry, but reflect different colors determined by their
original surface position and orientation. 2) Our NeRF-DS
reuses the moving object mask from the camera registra-
tion stage as an additional input to the deformation field.
This mask is a more consistent guidance for specular sur-
faces in motion compared to the constantly changing color.
The mask is also a strong cue to the deformation field on
the moving and static regions. As shown in Fig. 1, our pro-
posed NeRF-DS reconstructs and renders dynamic specular
scenes with significantly higher quality.
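A minimal sketch of this reparameterized query is shown below; the `deform` and `canonical` callables and their signatures are hypothetical stand-ins for the actual NeRF-DS networks.

```python
def nerf_ds_point(x_obs, d_view, n_obs, mask, t, deform, canonical):
    """Per-sample query of the surface-aware dynamic radiance field.

    x_obs:  observation-space point of frame t         (3,)
    d_view: viewing direction                           (3,)
    n_obs:  surface normal at x_obs, observation space  (3,)
    mask:   moving-object mask value in [0, 1] guiding the deformation
    deform(x_obs, mask, t) -> canonical-space coordinate
    canonical(x_can, d_view, x_obs, n_obs) -> (density, rgb)
    """
    # Mask-guided deformation: the object mask tells the warp which regions
    # move and which remain static.
    x_can = deform(x_obs, mask, t)

    # Density is shared in the canonical space, while the color branch is
    # additionally conditioned on the original observation-space position
    # and normal, so corresponding points may reflect different colors.
    density, rgb = canonical(x_can, d_view, x_obs, n_obs)
    return density, rgb
```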
We implement our NeRF-DS on top of the
state-of-the-art HyperNeRF [33] for dynamic scenes.
Since there are very limited dynamic specular objects in
the existing datasets, we collect another dynamic specular
dataset for evaluation. Our dataset consists of a variety
of moving/deforming specular objects in realistic environ-
ments. Experimental results on the dataset demonstrate that
the NeRF-DS significantly improves the quality of novel
view rendering on dynamic specular objects. The images
rendered by our NeRF-DS avoid many serious artifacts
compared to the existing NeRF models.
In summary, we have made the following contributions:
1. A reparameterized dynamic NeRF that models dy-
namic specular surface with additional observationspace coordinate and surface normal.
2. A mask guided deformation field that improves defor-
mation learned for dynamic specular objects.
3. A dynamic specular scene dataset with training and
testing monocular videos.
|
Yu_MVImgNet_A_Large-Scale_Dataset_of_Multi-View_Images_CVPR_2023 | Abstract
Being data-driven is one of the most iconic properties
of deep learning algorithms. The birth of ImageNet [24]
drives a remarkable trend of ‘learning from large-scale
data’ in computer vision. Pretraining on ImageNet to ob-
tain rich universal representations has been manifested to
benefit various 2D visual tasks, and becomes a standard
in 2D vision. However, due to the laborious collection of
real-world 3D data, there is yet no generic dataset serv-
ing as a counterpart of ImageNet in 3D vision, thus how
such a dataset can impact the 3D community is unraveled.
To remedy this defect, we introduce MVImgNet, a large-
scale dataset of multi-view images, which is highly conve-
nient to gain by shooting videos of real-world objects in hu-
man daily life. It contains 6.5 million frames from 219,188
videos crossing objects from 238 classes, with rich annota-
tions of object masks, camera parameters, and point clouds.The multi-view attribute endows our dataset with 3D-aware
signals, making it a soft bridge between 2D and 3D vision.
We conduct pilot studies for probing the potential of
MVImgNet on a variety of 3D and 2D visual tasks, includ-
ing radiance field reconstruction, multi-view stereo, and
view-consistent image understanding, where MVImgNet
demonstrates promising performance, remaining lots of
possibilities for future explorations.
Besides, via dense reconstruction on MVImgNet, a 3D
object point cloud dataset is derived, called MVPNet, cov-
ering 87,200 samples from 150 categories, with the class
label on each point cloud. Experiments show that MVP-
Net can benefit the real-world 3D object classification while
posing new challenges to point cloud understanding.
MVImgNet and MVPNet will be public, hoping to inspire
the broader vision community.
| 1. Introduction
Being data-driven, also known as data-hungry, is one of
the most important attributes of deep learning algorithms.
By training on large-scale datasets, deep neural networks
are able to extract rich representations. In the past few
years, the computer vision community has witnessed the
bloom of such a ‘learning from data’ regime [43, 44, 55], af-
ter the birth of ImageNet [24] – the pioneer of large-scale
real-world image datasets. Notably, pretraining on Ima-
geNet is well-proven to boost the model performance when
transferring the pretrained weights into not only high-level
[34, 41, 42, 60, 61] but also low-level visual tasks [15, 57],
and becomes a de-facto standard in 2D. Recently, various
3D datasets [5, 9, 11, 23, 33, 96, 104] are produced to facili-
tate 3D visual applications.
However, due to the non-trivial scanning and labori-
ous labeling of real-world 3D data (commonly organized
in point clouds or meshes), existing 3D datasets are ei-
ther synthetic or their scales are not comparable with Ima-
geNet [24]. Consequently, unlike in 2D vision where mod-
els are usually pretrained on ImageNet to gain universal rep-
resentation or commonsense knowledge, most of the current
methods in 3D area are directly trained and evaluated on
particular datasets for solving specific 3D visual tasks ( e.g.,
NeRF dataset [64] and ShapeNet [11] for novel view syn-
thesis, ModelNet [96] and ScanObjectNN [88] for object
classification, KITTI [33] and ScanNet [23] for scene un-
derstanding). Here, two crucial and successive issues can be
induced: (1)There is still no generic dataset in 3D vision,
as a counterpart of ImageNet in 2D. (2)What benefit such a
dataset can endow to 3D community is yet unknown . In this
paper, we focus on investigating these two problems and set
two corresponding targets: Build the primary dataset, then
explore its effect through experiments.
Milestone 1 – Dataset:
For a clearer picture of the first goal, we start by carefully
revisiting existing 3D datasets as well as ImageNet [24].
i)3D synthetic datasets [11,96] provide rich 3D CAD mod-
els. However, they lack real-world cues ( e.g., context, oc-
clusions, noises), which are indispensable for model robust-
ness in practical applications. ScanObjectNN [88] extracts
real-world 3D objects from indoor scene data, but is limited
in scale. For 3D scene-level dataset [5, 10, 23, 33, 37, 81],
their scales are still constrained by the laborious scanning
and labeling ( e.g., millions of points per scene). Addition-
ally, they contain specific inner-domain knowledge such as
a particularly intricate indoor room or outdoor driving con-
figurations, making it hard for general transfer learning.
ii)Although ImageNet [24] contains the most comprehen-
sive real-world objects, it only describes a 2D world that
misses 3D-aware signals. Since humans live in a 3D world,
3D consciousness is vitally important for realizing human-
like intelligence and solving real-life visual problems.Based on the above review, our dataset is created from
a new insight – multi-view images , as a soft bridge be-
tween 2D and 3D. It lies several benefits to remedying the
aforementioned defects. Such data can be easily gained in
considerable sizes via shooting an object around different
views on common mobile devices with cameras ( e.g., smart-
phones), which can be collected by crowd-sourcing in real
world . Moreover, the multi-view constraint can bring nat-
ural 3D visual signals (later experiments show that this not
only benefits 3D tasks but also 2D image understanding).
To this end, we build MVImgNet, containing 6.5 million
frames from 219,188 videos crossing real-life objects from
238 classes, with rich annotations of object masks, camera
parameters, and point clouds. You may take a glance at our
MVImgNet from Fig. 1.
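Purely for illustration, a consumer of such multi-view data might organize one capture as follows; this is a hypothetical record layout, not MVImgNet's actual file schema.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class MultiViewCapture:
    """Hypothetical per-object record mirroring the annotations listed above."""
    category: str               # one of the 238 object classes
    frame_paths: List[str]      # RGB frames sampled from the captured video
    mask_paths: List[str]       # per-frame object masks
    intrinsics: List[list]      # per-frame 3x3 camera intrinsics
    extrinsics: List[list]      # per-frame 4x4 world-to-camera poses
    point_cloud_path: str       # reconstructed object point cloud
```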
Milestone 2 – Experimental Exploration:
Now facing the second goal of this paper, we attempt to
probe the power of our dataset by conducting some pilot
experiments. Leveraging the multi-view nature of the data,
we start by focusing on the view-based 3D reconstruction
task and demonstrate that pretraining on MVImgNet can not
only benefit the generalization ability of NeRF (Sec. 4.1),
but also data-efficient multi-view stereo (Sec. 4.2). More-
over, for image understanding, although humans can easily
recognize one object from different viewpoints, deep learn-
ing models can hardly do that robustly [26,38]. Considering
MVImgNet provides numerous images of a particular ob-
ject from different viewpoints, we verify that MVImgNet-
pretrained models are endowed with decent view consis-
tency in general image classification (supervised learning
in Sec. 5.1, self-supervised contrastive learning in Sec. 5.2)
andsalient object detection (Sec. 5.3).
Bonus – A New 3D Point Cloud Dataset – MVPNet:
Through dense reconstruction on MVImgNet, a new 3D
object point cloud dataset is derived, called MVPNet,
which contains 87,200 point clouds with 150 categories,
with the class label on each point cloud (see Fig. 7). Exper-
iments show that MVPNet not only benefits the real-world
3D object classification task but also poses new challenges
and prospects to point cloud understanding (Sec. 6).
MVImgNet and MVPNet will be public, hoping to in-
spire the broader vision community.
|
Yue_Connecting_the_Dots_Floorplan_Reconstruction_Using_Two-Level_Queries_CVPR_2023 | Abstract
We address 2D floorplan reconstruction from 3D scans.
Existing approaches typically employ heuristically de-
signed multi-stage pipelines. Instead, we formulate floor-
plan reconstruction as a single-stage structured predic-
tion task: find a variable-size set of polygons, which in
turn are variable-length sequences of ordered vertices.
To solve it we develop a novel Transformer architec-
ture that generates polygons of multiple rooms in paral-
lel, in a holistic manner without hand-crafted intermedi-
ate stages. The model features two-level queries for poly-
gons and corners, and includes polygon matching to make
the network end-to-end trainable. Our method achieves
a new state-of-the-art for two challenging datasets, Struc-
tured3D and SceneCAD, along with significantly faster in-
ference than previous methods. Moreover, it can read-
ily be extended to predict additional information, i.e., se-
mantic room types and architectural elements like doors
and windows. Our code and models are available at:
https://github.com/ywyue/RoomFormer.
| 1. Introduction
The goal of floorplan reconstruction is to turn observa-
tions of an (indoor) scene into a 2D vector map in birds-
eye view. More specifically, we aim to abstract a 3D point
cloud into a set of closed polygons corresponding to rooms,
optionally enriched with further structural and semantic el-
ements like doors, windows and room type labels.
Floorplans are an essential representation that enables
a wide range of applications in robotics, AR/VR, inte-
rior design, etc.Like prior work [2, 3, 8, 9, 29], we start
from a 3D point cloud, which can easily be captured with
RGB-D cameras, laser scanners or SfM systems. Several
works [8, 9, 21, 29] have shown the effectiveness of project-
ing the raw 3D point data along the gravity axis, to obtain a
2D density map that highlights the building’s structural el-
ements ( e.g., walls). We also employ this early transition to
2D image space. The resulting density maps are compact
and computationally efficient, but inherit the noise and data
gaps of the underlying point clouds, hence floorplan recon-Input 3D Point Cloud Reconstructed Floorplan
Figure 1. Semantic floorplan reconstruction. Given a point
cloud of an indoor environment, RoomFormer jointly recovers
multiple room polygons along with their associated room types,
as well as architectural elements such as doors and windows.
struction remains a challenging task.
Existing methods can be split broadly into two categories
that both operate in two stages: Top-down methods [8, 29]
first extract room masks from the density map using neu-
ral networks ( e.g., Mask R-CNN [15]), then employ opti-
mization/search techniques ( e.g., integer programming [28],
Monte-Carlo Tree-Search [4]) to extract a polygonal floor-
plan. Such techniques are not end-to-end trainable, and
their success depends on how well the hand-crafted opti-
mization captures domain knowledge about room shape and
layout. Alternatively, bottom-up methods [9, 21] first de-
tect corners, then look for edges between corners ( i.e., wall
segments) and finally assemble them into a planar floorplan
graph. Both approaches are strictly sequential and therefore
dependent on the quality of the initial corner, respectively
room, detector. The second stage starts from the detected
entities, therefore missing or spurious detections may sig-
nificantly impact the reconstruction.
We address those limitations and design a model that
directly maps a density image to a set of room polygons.
Our model, named RoomFormer , leverages the sequence
prediction capabilities of Transformers and directly out-
puts a variable-length, ordered sequence of vertices per
room. RoomFormer requires neither hand-crafted, domain-
specific intermediate products nor explicit corner, wall or
room detections. Moreover, it predicts all rooms that make
up the floorplan at once, exploiting the parallel nature of the
Transformer architecture.
In more detail, we employ a standard CNN backbone to
extract features from the birds-eye view density map, fol-
lowed by a Transformer encoder-decoder setup that con-
sumes image features (supplemented with positional encod-
ings) and outputs multiple ordered corner sequences, in par-
allel. The floorplan is recovered by simply connecting those
corners in the predicted order. Note that the described pro-
cess relies on the ability to generate hierarchically struc-
tured output of variable and a-priori unknown size, where
each floorplan has a different number of rooms (with no nat-
ural order), and each room polygon has a different number
of (ordered) corners. We address this challenge by introduc-
ing two-level queries with one level for the room polygons
and one level for their corners. The varying numbers of
both rooms and corners are accommodated by additionally
classifying each query as valid or invalid. The decoder it-
eratively refines the queries, through self-attention among
queries and cross-attention between queries and image fea-
tures. To enable end-to-end training, we propose a poly-
gon matching strategy that establishes the correspondence
between predictions and targets, at both room and corner
levels. In this manner, we obtain an integrated model that
holistically predicts a set of polygons to best explain the ev-
idence in the density map, without hand-tuned intermediate
rules of which corners, walls or rooms to commit to along
the way. The model is also fast at inference, since it operates
in single-stage feed-forward mode, without optimization or
search and without any post-processing steps. Moreover, it
is flexible and can, with few straight-forward modifications,
predict additional semantic and structural information such
as room types, doors and windows (Fig. 1).
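A simplified sketch of how such two-level queries can be decoded into polygons is shown below; the query budget, feature dimension, and head design are assumptions and do not reproduce RoomFormer's exact heads or its polygon matching.

```python
import torch
import torch.nn as nn


class TwoLevelQueryHead(nn.Module):
    """Decode M x N two-level queries into room polygons.

    M candidate rooms, each with up to N candidate corners; refined decoder
    features are assumed to arrive as a (B, M * N, D) tensor.
    """
    def __init__(self, dim=256, num_rooms=20, num_corners=40):
        super().__init__()
        self.M, self.N = num_rooms, num_corners
        self.coord_head = nn.Linear(dim, 2)   # (x, y) corner coordinate
        self.valid_head = nn.Linear(dim, 1)   # corner validity logit

    def forward(self, queries):               # queries: (B, M*N, D)
        B = queries.shape[0]
        coords = self.coord_head(queries).sigmoid().view(B, self.M, self.N, 2)
        valid = self.valid_head(queries).view(B, self.M, self.N)
        # Each room polygon is read off by connecting, in order, the corners
        # whose validity score passes a threshold; rooms with no valid
        # corners are treated as empty slots of the variable-size set.
        return coords, valid
```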
We evaluate our model on two challenging datasets,
Structured3D [37] and SceneCAD [2]. For both of them,
RoomFormer outperforms the state of the art, while at the
same time being significantly faster than existing methods.
In summary, our contributions are:
• A new formulation of floorplan reconstruction, as
the simultaneous generation of multiple ordered se-
quences of room corners.
• The RoomFormer model, an end-to-end trainable,
Transformer-type architecture that implements the
proposed formulation via two-level queries that pre-
dict a set of polygons each consisting of a sequence of
vertex coordinates.
• Improved floorplan reconstruction scores on both
Structured3D [37] and SceneCAD [2], with faster in-
ference times.
• Model variants able to additionally predict semantic
room type labels, doors and windows.
|
Yu_Data-Free_Knowledge_Distillation_via_Feature_Exchange_and_Activation_Region_Constraint_CVPR_2023 | Abstract
Despite the tremendous progress on data-free knowledge
distillation (DFKD) based on synthetic data generation,
there are still limitations in diverse and efficient data syn-
thesis. It is naive to expect that a simple combination of
generative network-based data synthesis and data augmen-
tation will solve these issues. Therefore, this paper proposes
a novel data-free knowledge distillation method (Spaceship-
Net) based on channel-wise feature exchange (CFE) and
multi-scale spatial activation region consistency (mSARC)
constraint. Specifically, CFE allows our generative net-
work to better sample from the feature space and efficiently
synthesize diverse images for learning the student network.
However, using CFE alone can severely amplify the un-
wanted noises in the synthesized images, which may result
in failure to improve distillation learning and even have
negative effects. Therefore, we propose mSARC to assure
the student network can imitate not only the logit output but
also the spatial activation region of the teacher network in
order to alleviate the influence of unwanted noises in di-
verse synthetic images on distillation learning. Extensive
experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet, Im-
agenette, and ImageNet100 show that our method can work
well with different backbone networks, and outperform the
state-of-the-art DFKD methods. Code will be available at:
https://github.com/skgyu/SpaceshipNet.
| 1. Introduction
Knowledge distillation (KD) aims to train a lightweight
student model that can imitate the capability of a pre-trained
complicated teacher model. In the past decade, KD has been
studied in a wide range of fields such as image recogni-
tion, speech recognition, and natural language processing.
Traditional KD methods usually assume that the whole or
part of the training set used by the teacher network is ac-
cessible by the student network [17, 24, 34]. But in practi-
cal applications, there can be various kinds of constraints
in accessing the original training data, e.g., due to pri-
vacy issues in medical data [1, 2, 5, 20, 28, 35] and portrait
data [3], and the copyright and private nature of large data vol-
umes such as JFT-300M [40] and text-image data [37]. The
traditional KD methods no longer work under these sce-
narios. Recently, data-free knowledge distillation (DFKD)
[4,7,10,13,14,22,25,27,29,43,46] seeks to perform KD by
generating synthetic data instead of accessing the original
training data used by the teacher network to train the student
network. Thus, the general framework of DFKD consists of
two parts: synthetic data generation that replicates the orig-
inal data distribution and constraint design between student
and teacher network during distillation learning. Synthetic
data generation methods in DFKD mainly consist of noise
image optimization-based methods [4, 26, 31, 44] and gen-
erative network-based methods [8,9, 12,13, 27,29, 45]. The
former approaches optimize randomly initialized noise im-
ages to make them have the similar distribution to the orig-
inal training data. These methods can theoretically gener-
ate an infinite number of independent and identically dis-
tributed images for student network learning, but they are
usually extremely time-consuming, and thus are difficult
in generating sufficient synthetic data with high diversity.
The later approaches learn a generator to synthesize im-
ages that approximate the distribution of the original train-
ing data. These methods can be much faster than the image
optimization-based approach, but the diversity of the syn-
thesized data is usually limited because the generation of
different images are not completely independent with each
other.
Despite the encouraging results achieved, DFKD re-
mains a challenging task, because the synthetic data may
have a different distribution from the original data, which
could potentially result in bias in student network learn-
ing. The possible reason is that the noises in the synthe-
sized images can easily lead to the bias of the network’s
region of interest. In addition, the widely used KL diver-
gence constraint between student and teacher networks in
existing DFKD methods may not work well with synthetic
data [4].
This paper proposes a novel DFKD method that utilizes
channel-wise feature exchange (CFE) and multi-scale spa-
tial activation region consistency (mSARC) constraint to
improve knowledge transfer from the teacher network to
the student network. The proposed method enhances the
diversity of synthetic training data and the robustness to un-
wanted noises in synthetic images during distillation. Un-
like previous generative network-based methods that em-
ployed multiple generators to synthesize images [27] or
reinitialization and retraining of generator [13] to enhance
the synthetic training data diversity, our method improves
the synthetic training data diversity by using the features
of early synthetic images to perform CFE. When our gen-
erative network and those of other methods have learned
to generate the same number of synthetic images, the pro-
posed method can produce more diverse training data for
distillation. However, CFE also amplifies unwanted noise
in synthetic images, which may hinder distillation learning
(traditional data augmentation methods also suffer from this
problem, e.g., CutMix [47] and Mixup [50]). To address
this issue, we propose the mSARC constraint, which en-
ables the student network to learn discriminative cues from
similar regions to those used by the teacher network, effec-
tively overcoming the limitations of the traditional KL di-
vergence loss when applied to synthetic images during dis-
tillation learning. Moreover, combining our mSARC with
traditional data augmentation methods [47,50] can still sig-
nificantly improve distillation learning with synthetic data.
We evaluate our method on a number of datasets, includ-
ing CIFAR-10 [19], CIFAR-100 [19], and Tiny-ImageNet
[21]. Our approach demonstrates superior performance
compared to the state-of-the-art DFKD methods. More-
over, we observe that the student networks trained using our
DFKD method achieve comparable performance to those
trained using original training data. Additionally, we eval-
uate our method on subsets of ImageNet [11] with 10 and
100 classes and a resolution of 224×224, validating the
efficacy of our method on generating high-resolution syn-
thetic images for distillation learning. In our ablation study, we verify the effectiveness of the key components (CFE and
mSARC) and find that mSARC plays a particularly impor-
tant role when strong data augmentation is applied to the
synthetic images.
|
Xu_V2V4Real_A_Real-World_Large-Scale_Dataset_for_Vehicle-to-Vehicle_Cooperative_Perception_CVPR_2023 | Abstract
Modern perception systems of autonomous vehicles are
known to be sensitive to occlusions and lack the capabil-
ity of long perceiving range. It has been one of the key
bottlenecks that prevents Level 5 autonomy. Recent re-
search has demonstrated that the Vehicle-to-Vehicle (V2V)
cooperative perception system has great potential to rev-
olutionize the autonomous driving industry. However, the
lack of a real-world dataset hinders the progress of this
field. To facilitate the development of cooperative per-
ception, we present V2V4Real, the first large-scale real-
world multi-modal dataset for V2V perception. The data
is collected by two vehicles equipped with multi-modal
sensors driving together through diverse scenarios. Our
V2V4Real dataset covers a driving area of 410 km, com-
prising 20K LiDAR frames, 40K RGB frames, 240K anno-
tated 3D bounding boxes for 5 classes, and HDMaps that
cover all the driving routes. V2V4Real introduces three
perception tasks, including cooperative 3D object detec-
tion, cooperative 3D object tracking, and Sim2Real domain
adaptation for cooperative perception. We provide compre-
hensive benchmarks of recent cooperative perception algo-
rithms on three tasks. The V2V4Real dataset can be found
at research.seas.ucla.edu/mobility-lab/v2v4real/.
| 1. Introduction
Perception is critical in autonomous driving (AV) for ac-
curate navigation and safe planning. The recent develop-
ment of deep learning brings significant breakthroughs in
various perception tasks such as 3D object detection [22,
35, 42], object tracking [43, 56], and semantic segmenta-
tion [47, 57]. However, single-vehicle vision systems still
suffer from many real-world challenges, such as occlusions
and short-range perceiving capability [15, 40,49], which
Figure 1. A data frame sampled from V2V4Real: (a) aggregated
LiDAR data, (b) HD map, and (c) satellite map to indicate the
collective position. More qualitative examples of V2V4Real can
be found in the supplementary materials.
can cause catastrophic accidents. The shortcomings stem
mainly from the limited field-of-view of the individual ve-
hicle, leading to an incomplete understanding of the sur-
rounding traffic.
A growing interest and recent advancement in coop-
erative perception systems have enabled a new paradigm
that can potentially overcome the limitation of single-
vehicle perception. By leveraging vehicle-to-vehicle (V2V)
technologies, multiple connected and automated vehicles
(CAVs) can communicate and share captured sensor infor-
mation simultaneously. As shown in a complex intersection
in Fig. 1, for example, the ego vehicle (red LiDAR) strug-
gles to perceive the upcoming objects located across the
way due to occlusions. Incorporating the LiDAR features
from the nearby CAV (green scans) can largely broaden the
sensing range of the vehicle and make it even see across the
occluded corner.
Dataset | Year | Real/Sim | V2X | Size (km) | RGB images | LiDAR | Maps | 3D boxes | Classes | Locations
Kitti [14] | 2012 | Real | No | - | 15k | 15k | No | 200k | 8 | Karlsruhe
nuScenes [2] | 2019 | Real | No | 33 | 1.4M | 400k | Yes | 1.4M | 23 | Boston, SG
Argo [3] | 2019 | Real | No | 290 | 107k | 22k | Yes | 993k | 15 | 2x USA
Waymo Open [36] | 2019 | Real | No | - | 1M | 200k | Yes | 12M | 4 | 3x USA
OPV2V [50] | 2022 | Sim | V2V | - | 44k | 11k | Yes | 230k | 1 | CARLA
V2X-Sim [20] | 2022 | Sim | V2V&I | - | 60K | 10k | Yes | 26.6k | 1 | CARLA
V2XSet [49] | 2022 | Sim | V2V&I | - | 44K | 11k | Yes | 230k | 1 | CARLA
DAIR-V2X [54] | 2022 | Real | V2I | 20 | 39K | 39K | No | 464K | 10 | Beijing, CN
V2V4Real (ours) | 2022 | Real | V2V | 410 | 40K | 20K | Yes | 240K | 5 | Ohio, USA
Table 1. Comparison of the proposed dataset and existing representative autonomous driving datasets.
lenging to validate V2V perception in real-world scenarios
due to the lack of public benchmarks. Most of the exist-
ing V2V datasets, including OPV2V [50], V2X-Sim [20],
and V2XSet [49], rely on open-source simulators like
CARLA [11] to generate synthetic road scenes and traffic
dynamics with simulated |
Yang_Neural_Volumetric_Memory_for_Visual_Locomotion_Control_CVPR_2023 | Abstract
Legged robots have the potential to expand the reach of
autonomy beyond paved roads. In this work, we consider
the difficult problem of locomotion on challenging terrains
using a single forward-facing depth camera. Due to the
partial observability of the problem, the robot has to rely
on past observations to infer the terrain currently beneath
it. To solve this problem, we follow the paradigm in com-
puter vision that explicitly models the 3D geometry of the
scene and propose Neural Volumetric Memory (NVM), a ge-
ometric memory architecture that explicitly accounts for the
SE(3) equivariance of the 3D world. NVM aggregates fea-
ture volumes from multiple camera views by first bringing
them back to the ego-centric frame of the robot. We test the
learned visual-locomotion policy on a physical robot and
show that our approach, which explicitly introduces geo-
metric priors during training, offers superior performance
than more naïve methods. We also include ablation studies
and show that the representations stored in the neural vol-
umetric memory capture sufficient geometric information
to reconstruct the scene. Our project page with videos is
https://rchalyang.github.io/NVM
| 1. Introduction
Consider difficult locomotion tasks such as walking up and
down a flight of stairs and stepping over gaps (Figure 1). The control of such behaviors requires tight coupling with
perception because vision is needed to provide details of
the terrain right beneath the robot and the 3D scene imme-
diately around it. This problem is also partially-observable.
Immediately relevant terrain information is often occluded
from the robot’s current frame of observation, forcing it to
rely on past observations for control decisions. For this rea-
son, while blind controllers that are learned in simulation
using reinforcement learning have achieved impressive re-
sults in agility and robustness [ 33,36,38], there are clear
limitations on how much they can do. How to incorporate
perception into the pipeline to produce an integrated visuo-
motor controller thus remains an open problem.
A recent line of work combines perception with loco-
motion using ego-centric cameras mounted on the robot.
The predominant approach for addressing partial observ-
ability is to do frame-stacking, where the robot maintains
a visual buffer of recent images. This naïve heuristic suf-
fers from two major problems: first, frame-stacking on a
moving robot ignores the equivariance structure of the 3D
environment, making learning a lot more difficult as pol-
icy success now relies on being able to learn to account for
spurious changes in camera poses. A second but subtler is-
sue is that biological systems do not have the ability to save
detailed visual observations pixel-by-pixel. These concerns
motivate the creation of an intermediary, short-term mem-
ory mechanism to functionally aggregate streams of obser-
Figure 2. Overview of the simulated and real-world environments (Stages, Stairs, Obstacles, Stones): our simulated environments are shown on the left and the real-world environments on the right, with the corresponding visual observations from the front-facing depth camera in the bottom row. All policies are trained in simulation and transferred to the real world without fine-tuning.
vation into a single, coherent representation of the world.
Motivated by these observations, we introduce a novel
volumetric memory architecture for legged locomotion con-
trol. Our architecture consists of a 2D to 3D feature volume
encoder, and a pose estimator that can estimate the relative
camera poses given two input images. When combined,
the two networks constitute a Neural Volumetric Memory
(NVM) that takes as input a sequence of depth images taken
by a forward-looking, ego-centric camera and fuses them
together into a coherent latent representation for locomo-
tion control. We encourage the memory space to be SE(3)
equivariant to changes in the camera pose of the robot by in-
corporating translation and rotation operations based on es-
timated relative poses from the pose network. This inverse
transformation allows NVM to align feature volumes from
the past to the present ego-centric frame, making both inte-
grating over multiple timesteps into a coherent scene repre-
sentation and learning a policy, less difficult.
Our training pipeline follows a two-step teacher-student
process where the primary goal of the first stage is to pro-
duce behaviors in the form of a policy. After training com-
pletes, this policy can traverse these difficult terrains ro-
bustly, but it relies on privileged sensory information such
as an elevation map and ground-truth velocity. Elevation
maps obtained in the real world are often biased, incom-
plete, and full of errors [ 40], whereas ground-truth velocity
information is typically only available in instrumented en-
vironments. Hence in the visuomotor distillation stage of
the pipeline, which still runs in the simulator, we feed the
stream of ego-centric views from the forward depth cam-
era into the neural volumetric memory. We feed the content
of this memory into a small policy network and train ev-
erything end-to-end including the two network components
of the NVM using a behavior cloning loss where the state-
only policy acts as the teacher. For completeness, we offer
an additional self-supervised learning objective (Figure 4)
that relies on novel-view consistency for learning. The end
product of this visuomotor distillation pipeline is a memory-
equipped visuomotor policy that can operate directly on the Unitree A1 robot hardware (see Figure 1). A single policy
is used to handle all environments covered by this paper.
We provide comprehensive experiments and ablation studies
in both simulation and the real world, and show that our
method outperforms all baselines by a large margin. It is
thus essential to model the 3D structure of the environment.
|
Zhang_Generalization_Matters_Loss_Minima_Flattening_via_Parameter_Hybridization_for_Efficient_CVPR_2023 | Abstract
Most existing online knowledge distillation (OKD) tech-
niques typically require sophisticated modules to produce
diverse knowledge for improving students’ generalization
ability. In this paper, we strive to fully utilize multi-model
settings instead of well-designed modules to achieve a dis-
tillation effect with excellent generalization performance.
Generally, model generalization can be reflected in the flat-
ness of the loss landscape. Since averaging parameters of
multiple models can find flatter minima, we are inspired
to extend the process to the sampled convex combinations
of multi-student models in OKD. Specifically, by linearly
weighting students’ parameters in each training batch, we
construct a Hybrid-Weight Model (HWM) to represent the
parameters surrounding involved students. The supervi-
sion loss of HWM can estimate the landscape’s curva-
ture of the whole region around students to measure the
generalization explicitly. Hence we integrate HWM’s loss
into students’ training and propose a novel OKD frame-
work via parameter hybridization (OKDPH) to promote
flatter minima and obtain robust solutions. Considering
the redundancy of parameters could lead to the collapse
of HWM, we further introduce a fusion operation to keep
the high similarity of students. Compared to the state-of-
the-art (SOTA) OKD methods and SOTA methods of seek-
ing flat minima, our OKDPH achieves higher performance
with fewer parameters, benefiting OKD with lightweight
and robust characteristics. Our code is publicly available
at https://github.com/tianlizhang/OKDPH.
| 1. Introduction
Deep learning achieves breakthrough progress in a va-
riety of tasks by constructing a large capacity network
†Corresponding author
pre-trained on massive data [6]. In order to apply high-
parameterized models in the real-world scene with lim-
ited resources, the knowledge distillation (KD) technique
[12] aims to obtain a compact and effective student model
guided by a large-scale teacher model for model compres-
sion. Based on the developments of KD, Zhang et al. [44]
propose the concept of online knowledge distillation (OKD)
to view all networks as students and achieve mutual learn-
ing from scratch through peer teaching, liberating the dis-
tillation process from the dependency on pre-trained teach-
ers. Existing OKD methods mainly encourage students to
acquire diverse and rich knowledge, including aggregating
predictions [10, 33, 37], combining features [19, 22, 28],
working with peers [39, 47], learning from group lead-
ers [2], and receiving guidance from online teachers [39].
Nevertheless, these strategies focus on designing sophis-
ticated architectures to exploit heterogeneous knowledge to
enhance students’ generalization, but they lack explicit con-
straints on generalization. The concept of generalization to
deep models is the ability to fit correctly on previously un-
seen data [25], which can be reflected by the flatness of the
loss landscape [14, 18]. Flatness is the landscape’s local
curvature, which is costly to compute directly via the Hessian.
Considering the setting of multiple students in OKD, we uti-
lize the theory of multi-model fusion in parameter space [9]
to estimate the local curvature by the linear combination
of students' parameters (we call it a hybrid-weight model,
which is expressed as HWM). More specifically, HWM is
a stochastic convex combination of parameters of multiple
students on different data augmentations, which can sample
multiple local points on the landscape. Intuitively, HWM’s
loss reflects the upper and lower bounds of the local region’s
loss and estimates the curvature of the landscape. Minimiz-
ing HWM’s loss flattens the region and forms a landscape
with smaller curvature.
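A minimal sketch of how such a hybrid-weight model could be built and supervised is shown below, assuming a PyTorch 2.x setting. The Dirichlet sampling of the convex weights, the plain cross-entropy supervision, and the use of `torch.func.functional_call` are assumptions made for illustration, not the authors' code.

```python
# Sketch of the hybrid-weight model (HWM): per batch, sample convex weights and
# evaluate a supervision loss at the corresponding point in parameter space.
import torch
import torch.nn.functional as F
from torch.func import functional_call   # PyTorch >= 2.0

def hybrid_state(students, alphas):
    """Convex combination of the students' parameters: sum_k alpha_k * theta_k."""
    keys = dict(students[0].named_parameters()).keys()
    return {k: sum(a * dict(s.named_parameters())[k] for a, s in zip(alphas, students))
            for k in keys}

def hwm_loss(students, x, y):
    # Random convex weights for this batch (Dirichlet sampling is an assumption).
    alphas = torch.distributions.Dirichlet(torch.ones(len(students))).sample()
    theta_hwm = hybrid_state(students, alphas)
    logits = functional_call(students[0], theta_hwm, (x,))   # run the net with hybrid weights
    return F.cross_entropy(logits, y)
```

Because the hybrid parameters are a differentiable function of every student's parameters, minimizing this loss alongside the usual CE/KL terms pushes all students toward the same flat region.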
Based on the above observation, we propose a concise
and effective OKD framework, termed online knowledge
distillation with parameter hybridization (OKDPH), to pro-
mote flatter loss minima and achieve higher generaliza-
tion. We devise a novel loss function for students’ train-
ing that incorporates not only the standard Cross-Entropy (CE) loss
and Kullback-Leibler (KL) divergence loss, but also a su-
pervised learning loss from HWM. Specifically, HWM is
constructed in each batch by linearly weighting multiple
students’ parameters. The classification error of HWM ex-
plicitly measures the flatness of the region around students
on the loss landscape, reflecting their stability and general-
ization. The proposed loss equals imposing stronger con-
straints on the landscape, guiding students to converge in a
more stable direction. For intuitive understanding, we vi-
sualize the loss landscape of students obtained by different
methods in Fig. 1. Our students converge to one broader
and flatter basin (thus superior generalization performance),
while the students obtained by DML [44] converge to differ-
ent sharp basins, degrading the robustness and performance.
Unfortunately, directly hybridizing students’ parameters
can easily lead to the collapse of HWM due to the high
nonlinearity of deep neural networks [26, 31]. Therefore,
we restrict the differences between students through inter-
mittent fusion operations to ensure the high similarity of
multi-model parameters and achieve effective construction
of HWM. Concretely, at regular intervals, we hybridize
the parameter of HWM with each student and, conversely,
assign the hybrid parameter to the corresponding student.
This process shortens the distance between students, shown
as very close loss trajectories of our students in Fig. 1. How-
ever, it will not reduce diversity because students receive
different types of data augmentation, and they can easily
become diverse during training. Our method pulls students
in the same direction, plays the role of strong regularization,
and obtains one lightweight parameter that performs well in
various scenarios. The solution derived from our method
is expected to integrate the dark knowledge from multiple
models while maintaining a compact architecture and can
be competent for resource-constrained applications.
To sum up, our contributions are organized as follows:
• Inspired by the theory of multi-model fusion, we inno-
vatively extend traditional weight averaging to an on-
the-fly stochastic convex combination of students’ pa-
rameters, called a hybrid-weight model (HWM). The
supervision loss of HWM can estimate the curvature of
the loss landscape around students and explicitly mea-
sure the generalization.
• We propose a brand-new extensible and pow-
erful OKD framework via parameter hybridiza-
tion (OKDPH) for loss minima flattening, which flex-
ibly adapts to various network architectures without
modifying peers’ structures and extra modules. It is
the first OKD work that manipulates parameters.
Figure 1. The loss landscape visualization of four students (Ours-S1 and Ours-S2 are obtained by our method; DML-S1 and DML-S2 by DML), which are ResNet32 [11] models trained with the same settings on CIFAR-10 [20]. The four students start from the initial point (red points in the center) and converge to three basins along different trajectories. The x-axis and y-axis represent the values of model parameters obtained by PCA [23], and color encodes the loss.
• Extensive experiments on various backbones demon-
strate that our OKDPH can considerably improve the
students’ generalization and exceed the state-of-the-
art (SOTA) OKD methods and SOTA approaches of
seeking flat minima. Further loss landscape visualiza-
tion and stability analysis verify that our solution lo-
cates in the region having uniformly low loss and is
more robust to perturbations and limited data.
|
Yang_Visual_Recognition-Driven_Image_Restoration_for_Multiple_Degradation_With_Intrinsic_Semantics_CVPR_2023 | Abstract
Deep image recognition models suffer a significant per-
formance drop when applied to low-quality images since they are trained on high-quality images. Although many studies have investigated solving the issue through image restoration or domain adaptation, the former focuses on visual quality rather than recognition quality, while the latter requires semantic annotations for task-specific training. In this paper, to address more practical scenarios, we propose a Visual Recognition-Driven Image Restoration network for multiple degradation, dubbed VRD-IR, to recover high-quality images from various unknown corruption types from the perspective of visual recognition within one model. Concretely, we harmonize the semantic representations of diverse degraded images into a unified space in a dynamic manner, and then optimize them towards intrinsic semantics recovery. Moreover, a prior-ascribing optimization strategy is introduced to encourage VRD-IR to couple better with various downstream recognition tasks. Our VRD-IR is corruption- and recognition-agnostic, and can be inserted into various recognition tasks directly as an image enhancement module. Extensive experiments on multiple image distortions demonstrate that our VRD-IR surpasses existing image restoration methods and shows superior performance on diverse high-level tasks, including classification, detection, and person re-identification.
| 1. Introduction
We have witnessed the remarkable success made by deep
learning in image recognition tasks in recent years, such asclassification [ 25,33,66], detection [ 23,46,62,68], and seg-
mentation [ 9,51]. However, most of these approaches lever-
age the public datasets with high-quality images ( e.g., Ima-
*Equal contribution
†Corresponding Author
Figure 1. Illustration of visual quality and recognition quality using different dehazing methods on hazy CUB [73], plotted as PSNR (dB) against CUB top-1 accuracy on VGG16 (%). The top-1 accuracy is evaluated by VGG16 [66] pre-trained on clean CUB. Our method, VRD-IR, is shown in bold. As we can see, higher visual quality doesn't mean higher recognition quality.
geNet [ 64], CoCo [ 47]) for training, and they suffer a signif-
icant drop when applied to low-quality images ( e.g., hazy,
rainy, and noisy), since the statistical properties of pixels are ruined by image degradation [75].
An intuitive approach to tackle this issue is to restore the distorted images first, and then feed them into the succeeding recognition models. Along this line, various image enhancement methods have been developed to improve the human visual quality of corrupted images [15, 90]. However, the visual quality and the recognition quality of an image differ fundamentally from one another. As shown in Fig. 1, a restored image with better visual quality cannot guarantee satisfactory performance on downstream high-level vision tasks [57, 67, 81].
Another feasible solution is to encourage the recognition
models to learn corruption-invariant feature representations, which can be applied to low-quality images directly without image recovery. For that purpose, numerous datasets have
been created [26, 50, 80]. One common method is to narrow the distribution distance between low- and high-quality images in feature space [32, 37, 67, 81]. While promising, most of these methods neglect the fact that the adverse impacts of different degradation are quite different at the semantic level. On the other hand, they either assume that task-specific annotations are available during training, or can only handle a single corruption/recognition task, which hinders timely adaptation to a changing external environment and adjustment to flexible high-level tasks in the real world.
In this paper, we propose a visual recognition-driven im-
age restoration (VRD-IR) framework for multiple degradation, to recover a recognition-friendly high-quality image from its degraded version without knowing the specific degradation type or the downstream high-level task. We first harmonize the semantic features of images suffering from different degradation into a unified representation space, and then optimize them towards semantic recovery. Specifically, we design a model paradigm, Intrinsic Semantics Enhancement (ISE), which can restore different degraded semantic representations in a dynamic manner. It consists of a Degradation Normalization and Compensation (DNC) module for mapping different degraded features to a degradation-invariant space, and a Fourier Guided Modulation (FGM) module for guiding the feature enhancement with the statistical properties of the amplitude spectrum. For better perception of different semantics, a prior-ascribing optimization strategy is proposed. A semantic-aware decoder (SAD) is first pre-trained on both low- and high-quality images with the objective of reconstructing the high-quality image from the corresponding semantic features. To make full use of semantic information and provide good guidance for ISE, a similarity ranking loss is enforced during the pre-training of SAD. Then, we fix the pre-trained SAD and force the ISE to improve the quality of images reconstructed by SAD through enhancing the degraded semantic representations. In this way, we encourage the ISE to modulate the degraded input features from the perspective of machine vision.
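Since the module is not spelled out here, the following is only one plausible reading of a Fourier-guided modulation: global statistics of the amplitude spectrum predict an affine modulation of the feature map. The pooling step and the scale/shift form are assumptions, not the paper's design.

```python
# Sketch of a Fourier-guided modulation: derive scale/shift parameters from the
# amplitude spectrum of a feature map and use them to modulate the feature itself.
import torch
import torch.nn as nn

class FourierGuidedModulation(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.to_scale_shift = nn.Sequential(
            nn.Linear(channels, channels * 2), nn.ReLU(inplace=True),
            nn.Linear(channels * 2, channels * 2))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:   # feat: (B, C, H, W)
        amp = torch.fft.fft2(feat, norm="ortho").abs()        # amplitude spectrum
        stats = amp.mean(dim=(-2, -1))                        # (B, C) global statistics
        scale, shift = self.to_scale_shift(stats).chunk(2, dim=-1)
        return feat * (1 + scale[..., None, None]) + shift[..., None, None]
```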
Moreover, the proposed VRD-IR can be plugged into
pre-trained recognition models directly as a data enhancement module. Compared with feature distillation-based
methods that require task-specific annotations for training, our VRD-IR enjoys better flexibility and practicality.
We summarize our main contributions as follows:
• To the best of our knowledge, VRD-IR is the first attempt towards a universal image restoration framework purely for high-level vision. As VRD-IR can be integrated with various recognition models directly, it is more practical in real-world scenarios.
• Considering the different adverse impacts of different degradation on semantics, we design an Intrinsic Semantics Enhancement (ISE) module to modulate the degraded semantic representations in a dynamic manner.
• A prior-ascribing optimization strategy is proposed to endow VRD-IR with the capability to perceive degradation effects at the semantic level. Guided by this, our ISE can modulate degraded features from the perspective of machine vision.
• We verify the effectiveness of our framework on diverse high-level vision tasks, including classification, detection, and person re-identification. Experimental results show the superiority of our method in recognition tasks under multiple degradation.
|
Zhang_Learning_Neural_Proto-Face_Field_for_Disentangled_3D_Face_Modeling_in_CVPR_2023 | Abstract
Generative models show good potential for recovering
3D faces beyond limited shape assumptions. While plausible details and resolutions are achieved, these models easily fail under extreme conditions of pose, shadow or appearance, due to the entangled fitting or lack of multi-view priors. To address this problem, this paper presents a novel Neural Proto-face Field (NPF) for unsupervised robust 3D face modeling. Instead of using constrained images as in Neural Radiance Fields (NeRF), NPF disentangles the common/specific facial cues, i.e., ID, expression and scene-specific details, from in-the-wild photo collections. Specifically, NPF learns a face prototype to aggregate 3D-consistent identity via uncertainty modeling, extracting multi-image priors from a photo collection. NPF then learns to deform the prototype with the appropriate facial expressions, constrained by a loss of expression consistency and personal idiosyncrasies. Finally, NPF is optimized to fit a target image in the collection, recovering specific details of appearance and geometry. In this way, the generative model benefits from multi-image priors and meaningful facial structures. Extensive experiments on benchmarks show that NPF recovers superior or competitive facial shapes
and textures, compared to state-of-the-art methods.
| 1. Introduction
3D face reconstruction is a long-standing problem with
applications including games, digital human and mobile
photography. It is ill-posed in many cases requiring strong
assumptions e.g., shape from shading [ 99]. With the 3D
Morphable Model (3DMM) [ 10] proposed, such a problem
can be solved by fitting parameters to the target faces [ 67,
68,107]. Recently, deep-learning methods [ 22,25,43,64,
*Chengjie Wang and Ying Tai are corresponding authors
Figure 1. (a) Comparison with graphics-renderer-based methods LAP [100] and D3DFR [20]. Our method models geometry details and photo-realistic texture. (b) Results of neural rendering methods EG3D [13] + PTI [66], HeadNeRF [34] and our method. Our method produces high-quality geometry and robust texture modeling under rotation and deformation.
105] are proposed to regress 3DMM parameters from in-
put images. These approaches are then improved by non-linear modeling [ 24,29,79,81,84,94] and multi-view con-
sistency [ 7,15,76,90]. Besides 3DMM methods, recent ef-
forts [ 91,100] attempt to model 3D face without shape as-
sumptions. These non-parametric methods have potential
ability to improve the modeling quality beyond 3DMM.
Although the aforementioned methods achieve impres-
sive performance, they also have obvious drawbacks. On
the one hand, as the parametric models are usually built from a small number of subjects (e.g., BFM [58] with 200
subjects) under rigidly controlled conditions, they may be fragile to large variations of identity [106], and have limi-
tations on building teeth, skin details or anatomic grounded
muscles [ 23]. On the other hand, all of these methods de-
pend on graphics renderers [ 42,44,46] in the analysis-by-
synthesis fitting procedure, and thus yield hand-crafted approximations or ill-posed decompositions of intrinsic clues.
Hence, as illustrated in Fig. 1-(a), these methods struggle to
produce photo-realistic texture or geometric details.
Against these limitations, efforts are made to use a neu-
ral renderer such as StyleGAN [ 38,39] to model faces by
inverting the corresponding images [1, 2] into W space. Ex-
isting methods [ 11,18,59,62,92] mainly learn to embed
3DMM coefficients to implicitly leverage 3D clues, but they have difficulty achieving precise 3D controls due to their entangled image formation. To disentangle neural rendering, recent works [13, 14, 34, 54] employ explicit 3D pipelines,
e.g., Neural Radiance Field (NeRF) [ 52] into the Style-
GANs’ framework, so that face shapes and camera views
can be extracted. In this way, precise 3D controls and detailed geometry can be obtained. However, these methods
still show fragile performance under challenging conditions
as shown in Fig. 1-(b). When confronting large poses, ex-
treme appearance or lighting, the lack of facial priors dis-
turbs the reconstruction and results in severe distortions.
This is due to the essentially overfitting objective of inverting a single target image, where geometric ambiguity is unavoidable.
On top of this, one solution is to leverage reliable pri-
ors, e.g., multi-image consistency as a complement. While
NeRF provides a natural paradigm to dig multi-view cues, it
requires fully constrained images that are difficult to obtain.
Even conditioned by style codes [ 13,14,54], there is no di-
rect way to build 3D faces from unconstrained portrait col-
lections in such a neural rendering mechanism. In this work, we present a novel Neural Proto-face Field (NPF) for unsupervised robust 3D face modeling, where ID, expression and scene-specific details can be disentangled from in-the-wild photo collections. To aggregate ID-aware cues, NPF leverages uncertainty modeling to extract multi-image priors and recovers a face prototype with an ID-consistent face shape. To disentangle the expression, NPF then learns appropriate representations to deform the prototype, constrained by an expression consistency loss. In this way, the learned face shape is properly blended to avoid geometric ambiguity. Finally, to recover the scene-specific details, NPF is optimized to fit a target image in the collection. The robustness of fitting is guaranteed by a geometry and appearance regularization. As shown in Fig. 1-(b), NPF makes the generative method benefit from multi-image priors in unconstrained environments, and produces high-quality 3D faces under challenging conditions.
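As a schematic illustration of the uncertainty-based aggregation described above (the exact weighting used in NPF is not specified here), per-image identity codes could be pooled with inverse-uncertainty weights; the Gaussian-style weighting below is an assumption.

```python
# Schematic of uncertainty-aware prototype aggregation over a photo collection.
# Each image i contributes an identity code z_i and a predicted log-variance; the
# prototype down-weights unreliable images (inverse-variance weighting is assumed).
import torch

def aggregate_prototype(codes: torch.Tensor, log_var: torch.Tensor) -> torch.Tensor:
    """codes: (N, D) per-image ID codes; log_var: (N,) predicted log-variances."""
    w = torch.softmax(-log_var, dim=0)            # low uncertainty -> high weight
    return (w.unsqueeze(-1) * codes).sum(dim=0)   # (D,) face prototype code

codes = torch.randn(8, 256)
log_var = torch.randn(8)
proto = aggregate_prototype(codes, log_var)
print(proto.shape)  # torch.Size([256])
```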
In summary, our contributions are as follows:
1) A novel Neural Proto-face Field (NPF) is proposed to disentangle ID, expression and specific details from 3D face

Methods | Rendering | Pipeline | Multi-view
EMOCA [81], DECA [24], Unsup3D [91] | Graphics | Disentangled | ×
LAP [100], FML [76], MVF [90] | Graphics | Disentangled | ✓
DFG [18], StyleRig [77], StyleFlow [3] | Neural | Entangled | ×
Pi-GAN [14], StyleSDF [54], EG3D [13] | Neural | Disentangled | ×
Ours | Neural | Disentangled | ✓
Table 1. Discussion with selected existing methods.
modeling, which uses in-the-wild photo collections to ben-
efit the 3D generative model under challenging conditions.
2) With a novel face prototype aggregation method, NPF integrates multi-image face priors against the large variations in unconstrained environments.
3) With a series of novel consistency losses, NPF is well fit to specific scenes with personalized details, based on the guidance of face prototypes.
|
Yi_Weakly-Supervised_Single-View_Image_Relighting_CVPR_2023 | Abstract
We present a learning-based approach to relight a sin-
gle image of Lambertian and low-frequency specular ob-
jects. Our method enables inserting objects from pho-
tographs into new scenes and relighting them under the
new environment lighting, which is essential for AR appli-
cations. To relight the object, we solve both inverse ren-
dering and re-rendering. To resolve the ill-posed inverse
rendering, we propose a weakly-supervised method by a
low-rank constraint. To facilitate the weakly-supervised
training, we contribute Relit, a large-scale (750K images)
dataset of videos with aligned objects under changing il-
luminations. For re-rendering, we propose a differen-
tiable specular rendering layer to render low-frequency
non-Lambertian materials under various illuminations of
spherical harmonics. The whole pipeline is end-to-end and
efficient, allowing for a mobile app implementation of AR
object insertion. Extensive evaluations demonstrate that
our method achieves state-of-the-art performance. Project
page: https://renjiaoyi.github.io/relighting/.
| 1. Introduction
Object insertion finds extensive applications in Mobile
AR. Existing AR object insertions require a perfect mesh
of the object being inserted. Mesh models are typically
built by professionals and are not easily accessible to am-
ateur users. Therefore, in most existing AR apps such as
SnapChat and Ikea Place, users can use only built-in vir-
tual objects for scene augmentation. This may greatly limit
user experience. A more appealing setting is to allow the
user to extract objects from a photograph and insert them
into the target scene with proper lighting effects. This calls
for a method of inverse rendering and relighting based on a
single image, which has so far been a key challenge in the
graphics and vision fields.
Relighting real objects requires recovering lighting, ge-
ometry and materials which are intertwined in the observed
image; it involves solving two problems, inverse render-
*Co-first authors.
†Corresponding author: [email protected].
Figure 1. Our method relights real objects into new scenes from
single images, which also enables editing materials from diffuse
to glossy with non-Lambertian rendering layers.
ing [17] and re-rendering. Furthermore, to achieve real-
istic results, the method needs to be applicable for non-
Lambertian objects. In this paper, we propose a pipeline
to solve both problems, weakly-supervised inverse render-
ing and non-Lambertian differentiable rendering for Lam-
bertian and low-frequency specular objects.
Inverse rendering is a highly ill-posed problem, with sev-
eral unknowns to be estimated from a single image. Deep
learning methods excel at learning strong priors for reduc-
ing ill-posedness. However, this comes at the cost of a large
amount of labeled training data, which is especially cum-
bersome to prepare for inverse rendering since ground truths
of large-scale real data are impossible to obtain. Synthetic
training data brings the problem of domain transfer. Some
methods explore self-supervised pipelines and acquire ge-
ometry supervisions of real data from 3D reconstruction by
multi-view stereo (MVS) [34, 35]. Such approaches, how-
ever, have difficulties in handling textureless objects.
To tackle the challenge of training data shortage, we pro-
pose a weakly-supervised inverse rendering pipeline based
on a novel low-rank loss and a re-rendering loss. For low-
rank loss, a base observation here is that the material re-
flectance is invariant to illumination change, as an intrin-
sic property of an object. We derive a low-rank loss for
inverse rendering optimization which imposes that the re-
flectance maps of the same object under changing illumi-
nations are linearly correlated . In particular, we constrain
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
8402
...
Encoder Decoder
Skip connections
Encoder
Lighting coefficients+ MLP
Skip connections
...Shape normal
/
Relit DatasetDifferentiable Render LayerTraining batch
Rendered shading Reflectance
Singular reflectanceNormal -Net
Light -NetLowrank Constraint Module
Self-supervision
Normal -
Net
Light -
Net+Render Layer
Novel lightings/material
Render Layer
Inverse rendering decompositions
+
Training
Relighting
Spec -
Net
Spec -NetDiffuse
branc h
Specular
parameters
Figure 2. Overview of our method. At training time, Spec-Net separates input images into specular and diffuse branches. Spec-Net,
Normal-Net and Light-Net are trained in a self-supervised manner by the Relit dataset. At inference time, inverse rendering properties are
predicted to relight the object under novel lighting and material. The non-Lambertian render layers produce realistic relit images.
the reflectance matrix with each row storing one of the re-
flectance maps to be rank one. This is achieved by minimiz-
ing a low-rank loss defined as the Frobenius norm between
the reflectance matrix and its rank-one approximation. We
prove the convergence of this low-rank loss. In contrast,
traditional Euclidean losses lack a convergence guarantee.
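A sketch of such a loss is given below. The rank-one approximation is taken from the leading singular triplet, and the normalization by the matrix size is an added convenience, so this should be read as an illustration rather than the paper's exact formulation.

```python
# Sketch of the low-rank loss: stack a batch's reflectance maps as rows of R and
# penalize the Frobenius distance between R and its best rank-one approximation.
import torch

def low_rank_loss(reflectance: torch.Tensor) -> torch.Tensor:
    """reflectance: (N, H, W, 3) maps of the same object under N illuminations."""
    R = reflectance.reshape(reflectance.shape[0], -1)        # (N, H*W*3)
    U, S, Vh = torch.linalg.svd(R, full_matrices=False)
    R1 = S[0] * U[:, :1] @ Vh[:1, :]                         # rank-one approximation
    return torch.linalg.matrix_norm(R - R1) / R.numel() ** 0.5   # scaled Frobenius norm

loss = low_rank_loss(torch.rand(4, 64, 64, 3))
```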
To facilitate the learning, we contribute Relit, a large-
scale dataset of videos of real-world objects with changing
illuminations. We design an easy-to-deploy capturing sys-
tem: a camera faces toward an object, both placed on top
of a turntable. Rotating the turntable will produce a video
with the foreground object staying still and the illumination
changing. To extract the foreground object from the video,
manual segmentation of the first frame suffices since the ob-
ject is aligned across all frames.
As shown in Figure 2, a fixed number of images under
different lighting are randomly selected as a batch. We first
devise a Spec-Net to factorize the specular highlight, trained
by the low-rank loss on the chromaticity maps of diffuse im-
ages (image subtracts highlight) which should be consistent
within the batch. With the factorized highlight, we further
predict the shininess and specular reflectance, which is self-
supervised with the re-rendering loss of specular highlight.
For the diffuse branch, we design two networks, Normal-
Net and Light-Net, to decompose the diffuse component
by predicting normal maps and spherical harmonic lighting
coefficients, respectively. The diffuse shading is rendered
by normal and lighting, and diffuse reflectance (albedo) is
computed by diffuse image and shading. Both networks are trained by low-rank loss on diffuse reflectance.
Regarding the re-rendering phase, the main difficulty is
the missing 3D information of the object given a single-
view image. The Normal-Net produces a normal map which
is a partial 3D representation, making the neural rendering
techniques and commercial renderers inapplicable. The ex-
isting diffuse rendering layer for normal maps of [20] can-
not produce specular highlights. PyTorch3D and [9, 11] ren-
der specular highlights for point lights only.
To this end, we design a differentiable specular renderer
from normal maps, based on the Blinn-Phong specular re-
flection [5] and spherical harmonic lighting [6]. Combining
with the differentiable diffuse renderer, we can render low-
frequency non-Lambertian objects with prescribed parame-
ters under various illuminations, and do material editing as
a byproduct.
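For illustration, the sketch below evaluates a differentiable Blinn-Phong specular term from a normal map under a single directional light; the paper's layer instead evaluates the specular response under spherical-harmonic lighting, which is omitted here for brevity, and the constants used are placeholders.

```python
# Minimal differentiable Blinn-Phong specular shading from a normal map, for a single
# directional light (a simplification of the SH-lit version described in the text).
import torch
import torch.nn.functional as F

def blinn_phong_specular(normals, light_dir, view_dir, ks, shininess):
    """normals: (B, 3, H, W) unit normals; light_dir/view_dir: (3,) unit vectors;
    ks: specular reflectance, shininess: Blinn-Phong exponent."""
    half = F.normalize(light_dir + view_dir, dim=0)              # half-way vector
    n_dot_h = (normals * half.view(1, 3, 1, 1)).sum(dim=1).clamp(min=0.0)
    return ks * n_dot_h ** shininess                             # (B, H, W) specular term

normals = F.normalize(torch.randn(1, 3, 64, 64), dim=1)
spec = blinn_phong_specular(normals, torch.tensor([0., 0., 1.]),
                            torch.tensor([0., 0., 1.]), ks=0.5, shininess=32.0)
```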
We have developed an Android app based on our method
which allows amateur users to insert and relight arbitrary
objects extracted from photographs in a target scene. Exten-
sive evaluations on inverse rendering and image relighting
demonstrate the state-of-the-art performance of our method.
Our contributions include:
• A weakly-supervised inverse rendering pipeline
trained with a low-rank loss. The correctness and con-
vergence of the loss are mathematically proven.
• A large-scale dataset of foreground-aligned videos col-
lecting 750K images of 100+ real objects under differ-
ent lighting conditions.
• An Android app implementation for amateur users to
make a home run.
|
Zhang_Painting_3D_Nature_in_2D_View_Synthesis_of_Natural_Scenes_CVPR_2023 | Abstract
We introduce a novel approach that takes a single seman-
tic mask as input to synthesize multi-view consistent color
images of natural scenes, trained with a collection of single
images from the Internet. Prior works on 3D-aware image
synthesis either require multi-view supervision or learning
category-level prior for specific classes of objects, which
are inapplicable to natural scenes. Our key idea to solve
this challenge is to use a semantic field as the intermedi-
ate representation, which is easier to reconstruct from an
input semantic mask and then translated to a radiance field
with the assistance of off-the-shelf semantic image synthe-
sis models. Experiments show that our method outperforms
baseline methods and produces photorealistic and multi-
view consistent videos of a variety of natural scenes. The
∗Affiliated with the State Key Lab of CAD&CG, Zhejiang University.
†Corresponding author: Xiaowei Zhou.
project website is https://zju3dv.github.io/paintingnature/.
| 1. Introduction
Natural scenes are indispensable content in many appli-
cations such as film production and video games. This work
focuses on a specific setting of synthesizing novel views
of natural scenes given a single semantic mask, which en-
ables us to generate 3D contents by editing 2D semantic
masks. With the development of deep generative models,
2D semantic image synthesis methods [24, 46, 61, 66] have
achieved impressive advances. However, they do not con-
sider the underlying 3D structure and cannot generate multi-
view consistent free-viewpoint videos.
To address this problem, a straightforward approach
is first utilizing a semantics-driven image generator like
SPADE [46] to synthesize an image from the input semantic
mask and then predicting novel views based on the gener-
ated image. Although the existing single-view view syn-
thesis methods [31, 34, 45, 52, 67, 70] achieve impressive
rendering results, they typically require training networks
on posed multi-view images. Compared to urban or indoor
scenes, learning to synthesize natural scenes is a challeng-
ing task as it is difficult to collect 3D data or posed videos of
natural scenes for training, as demonstrated in [32], making
the aforementioned methods not applicable. AdaMPI [17]
designs a training strategy to learn the view synthesis net-
work on single-view image collections. It warps images to
random novel views and warps them back to the original
view. An inpainting network is trained to fill the holes in
disocclusion regions to match the original images. After
training, the inpainting network is used to generate pseudo
multi-view images for training a view synthesis network.
Our experimental results in Section 4.5 show that the in-
painting network struggles to output high-quality image
contents in missing regions under large viewpoint changes,
thus limiting the rendering quality.
In this paper, we propose a novel framework for
semantics-guided view synthesis of natural scenes by learn-
ing prior from single-view image collections. Based on the
observation that semantic masks have much lower complex-
ity than images, we divide this task into two simpler sub-
problems: we first generate semantic masks at novel views
and then translate them to RGB images through SPADE.
For view synthesis of semantic masks, the input semantic
mask is first translated to a color image by SPADE, and a
depth map is predicted from the color image by a depth es-
timator [50]. Then, the input semantic mask is warped to
novel views using the predicted depth map and refined by
an inpainting network trained by a self-supervised learning
strategy on single-view image collections. Our experiments
show that, in contrast to images, the novel view synthesis of
semantic masks is much easier to learn by the network.
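A simplified forward-warping sketch of this step is shown below; the intrinsics, the relative pose, and the nearest-pixel splatting without z-buffering are all simplifying assumptions, and the empty (disoccluded) labels it leaves behind are exactly what the inpainting network is trained to fill.

```python
# Sketch of warping a semantic mask to a novel view with a predicted depth map.
# Pixels are unprojected with intrinsics K, moved by the relative pose (R, t), and
# re-projected; nearest-pixel splatting (no z-buffering, for brevity) leaves holes
# (label 0) in disoccluded regions. K, R, t are assumed given.
import torch

def warp_semantic_mask(sem, depth, K, R, t):
    """sem: (H, W) int labels, depth: (H, W), K: (3, 3), R: (3, 3), t: (3,)."""
    H, W = sem.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=-1).float()      # (H, W, 3)
    cam = depth[..., None] * (pix @ torch.linalg.inv(K).T)             # unproject
    cam_new = cam @ R.T + t                                            # relative pose
    proj = cam_new @ K.T
    uv = (proj[..., :2] / proj[..., 2:].clamp(min=1e-6)).round().long()
    out = torch.zeros_like(sem)
    valid = (uv[..., 0] >= 0) & (uv[..., 0] < W) & (uv[..., 1] >= 0) & (uv[..., 1] < H)
    out[uv[..., 1][valid], uv[..., 0][valid]] = sem[valid]             # nearest splat
    return out
```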
It is observed that semantic masks generated by the in-
painting network tend to be view-inconsistent. As a result,
SPADE could generate quite different contents in these re-
gions even when the inconsistency is minor between the se-
mantic masks. Fig. 4 presents two examples. To solve this
issue, we learn a neural semantic field to fuse and denoise
these semantic masks for better multi-view consistency. Fi-
nally, we translate the multi-view semantic masks to color
images by SPADE and reconstruct a neural scene represen-
tation for view-consistent rendering.
Extensive experiments are conducted on the LHQ
dataset [58], a widely-used benchmark dataset for semantic
image synthesis. The results demonstrate that our approach
significantly outperforms baseline methods both qualita-
tively and quantitatively. We also show that by editing the
input semantic mask, our approach is capable of generating
various high-quality rendering results of natural scenes, as
shown in Fig. 1. |
Zhang_Exploiting_Completeness_and_Uncertainty_of_Pseudo_Labels_for_Weakly_Supervised_CVPR_2023 | Abstract
Weakly supervised video anomaly detection aims to iden-
tify abnormal events in videos using only video-level labels.
Recently, two-stage self-training methods have achieved
significant improvements by self-generating pseudo labels
and self-refining anomaly scores with these labels. As the
pseudo labels play a crucial role, we propose an enhance-
ment framework by exploiting completeness and uncertainty
properties for effective self-training. Specifically, we first
design a multi-head classification module (each head serves
as a classifier) with a diversity loss to maximize the distri-
bution differences of predicted pseudo labels across heads.
This encourages the generated pseudo labels to cover as
many abnormal events as possible. We then devise an it-
erative uncertainty pseudo label refinement strategy, which
improves not only the initial pseudo labels but also the up-
dated ones obtained by the desired classifier in the sec-
ond stage. Extensive experimental results demonstrate the
proposed method performs favorably against state-of-the-
art approaches on the UCF-Crime, TAD, and XD-Violence
benchmark datasets.
| 1. Introduction
Automatically detecting abnormal events in videos has
attracted increasing attention for its broad applications in
intelligent surveillance systems. Since abnormal events are
sparse in videos, recent studies are mainly developed within
the weakly supervised learning framework [5,12,19,25,27,
*Corresponding author.
Figure 1. Illustration of the completeness: (a) represents a video
that contains multiple abnormal clips (ground truth anomalies are
in the orange area). Existing methods tend to focus on the most
anomalous clip as shown in (b). We propose to use the multi-head
classification module together with a diversity loss to encourage
pseudo labels to cover the complete abnormal events as depicted
in (c).
29, 32, 34, 37–41], where only video-level annotations are
available. However, the goal of anomaly detection is to pre-
dict frame-level anomaly scores during test. This results in
great challenges for weakly supervised video anomaly de-
tection.
Existing methods broadly fall into two categories: one-
stage methods based on Multiple Instance Learning (MIL)
and two-stage self-training methods. One-stage MIL-based
methods [19, 27, 29, 39, 41] treat each normal and abnor-
mal video as a negative and positive bag respectively, and
clips of a video are the instances of a bag. Formulating
anomaly detection as a regression problem, these methods
adopt ranking loss to encourage the highest anomaly score
in a positive bag to be higher than that in a negative bag.
Due to the lack of clip-level annotations, the anomaly scores
generated by MIL-based methods are usually less accurate.
To alleviate this problem, two-stage self-training methods
are proposed [5, 12]. In the first stage, pseudo labels for
clips are generated by MIL-based methods. In the second
stage, MIST [5] utilizes these pseudo labels to refine dis-
criminative representations. In contrast, MSL [12] refines
the pseudo labels via a transformer-based network. Despite
progress, existing methods still suffer two limitations. First,
the ranking loss used in the pseudo label generator ignores
the completeness of abnormal events. The reason is that a
positive bag may contain multiple abnormal clips as shown
in Figure 1, but MIL is designed to detect only the most
likely one. The second limitation is that the uncertainty of
generated pseudo labels is not taken into account in the sec-
ond stage. As the pseudo labels are usually noisy, directly
using them to train the final classifier may hamper its per-
formance.
To address these problems, we propose to enhance
pseudo labels via exploiting completeness and uncertainty
properties. Specifically, to encourage the complete detec-
tion of abnormal events, we propose a multi-head module
to generate pseudo labels (each head serves as a classifier)
and introduce a diversity loss to ensure the distribution dif-
ference of pseudo labels generated by the multiple classi-
fication heads. In this way, each head tends to discover a
different abnormal event, and thus the pseudo label genera-
tor covers as many abnormal events as possible. Then, in-
stead of directly training a final classifier with all pseudo la-
bels, we design an iterative uncertainty-based training strat-
egy. We measure the uncertainty using Monte Carlo (MC)
Dropout [6] and only clips with lower uncertainty are used
to train the final classifier. At the first iteration, we use
such uncertainty to refine pseudo labels obtained in the first
stage, and in the remaining iterations, we use it to refine the
output of the desired final classifier.
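The sketch below illustrates how these two ingredients could look in code; the sigmoid score heads, the pairwise form of the diversity term, and the ten stochastic forward passes are illustrative assumptions rather than the paper's exact design.

```python
# Sketch of (i) a multi-head pseudo-label generator whose heads are pushed apart by a
# diversity loss on their clip-level scores, and (ii) MC-Dropout predictive variance
# used to keep only reliable clips for the final classifier.
import torch
import torch.nn as nn

class MultiHeadGenerator(nn.Module):
    def __init__(self, feat_dim: int, num_heads: int = 3):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                           nn.Dropout(0.3), nn.Linear(128, 1)) for _ in range(num_heads)])

    def forward(self, feats):                        # feats: (B, T, D) clip features
        return [torch.sigmoid(h(feats)).squeeze(-1) for h in self.heads]   # list of (B, T)

def diversity_loss(scores):
    """Encourage the heads to produce different score distributions over the clips."""
    loss, n = 0.0, 0
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            loss = loss - (scores[i] - scores[j]).pow(2).mean()   # maximize differences
            n += 1
    return loss / max(n, 1)

@torch.no_grad()
def mc_dropout_uncertainty(classifier, feats, passes: int = 10):
    """classifier: any clip-level scorer with dropout returning (B, T, 1) logits."""
    classifier.train()                               # keep dropout active at inference
    runs = torch.stack([torch.sigmoid(classifier(feats)).squeeze(-1)
                        for _ in range(passes)])
    return runs.var(dim=0)                           # (B, T) per-clip predictive variance
```

Clips whose predictive variance falls below a threshold would then be kept for training the final classifier, and the procedure repeated over iterations.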
The main contributions of this paper are as follows:
• We design a multi-head classifier scheme together with
a diversity loss to encourage the pseudo labels to cover
as many abnormal clips as possible.
• We design an iterative uncertainty aware self-training
strategy to gradually improve the quality of pseudo la-
bels.
• Experiments on UCF-Crime, TAD, and XD-Violence
datasets demonstrate the favorable performance com-
pared to several state-of-the-art methods. |
Ye_NEF_Neural_Edge_Fields_for_3D_Parametric_Curve_Reconstruction_From_CVPR_2023 | Abstract
We study the problem of reconstructing 3D feature
curves of an object from a set of calibrated multi-view im-
ages. To do so, we learn a neural implicit field repre-
senting the density distribution of 3D edges which we re-
fer to as Neural Edge Field (NEF). Inspired by NeRF [20],
NEF is optimized with a view-based rendering loss where
a 2D edge map is rendered at a given view and is com-
pared to the ground-truth edge map extracted from the im-
age of that view. The rendering-based differentiable opti-
mization of NEF fully exploits 2D edge detection, without
needing a supervision of 3D edges, a 3D geometric oper-
ator or cross-view edge correspondence. Several technical
designs are devised to ensure learning a range-limited and
view-independent NEF for robust edge extraction. The fi-
nal parametric 3D curves are extracted from NEF with an
iterative optimization method. On our benchmark with syn-
thetic data, we demonstrate that NEF outperforms exist-
ing state-of-the-art methods on all metrics. Project page:
https://yunfan1202.github.io/NEF/.
| 1. Introduction
Feature curves “define” 3D shapes to an extent, not
only geometrically (surface reconstruction from curve net-
works [15, 16]) but also perceptually (feature curve based
shape perception [4, 35]). Therefore, feature curve extrac-
*Corresponding author.
tion has been a long-standing problem in both graphics and
vision. Traditional approaches to 3D curve extraction often
work directly on 3D shapes represented by, e.g., polygo-
nal meshes or point clouds. Such approaches come with a
major difficulty: Sharp edges may be partly broken or com-
pletely missed due to imperfect 3D acquisition and/or re-
construction. Consequently, geometrically-based methods,
even the state-of-the-art ones, are sensitive to parameter set-
tings and error-prone near rounded edges, noise, and sparse
data. Recently, learning-based methods are proposed to ad-
dress these issues but with limited generality [18,19,33,39].
In many cases, edges are visually prominent and easy to
detect in the 2D images of a 3D shape. To resolve occlusion,
one may think of 3D curve reconstruction from multi-view
edges. This solution, however, relies strongly on cross-view
edge correspondence which itself is a highly difficult prob-
lem [28]. This explains why there is rarely a work on multi-
view curve reconstruction even in the deep learning era. We
ask this question: Can we learn 3D feature curve extraction
directly from the input of multi-view images?
In this work, we try to answer the question through learn-
ing a neural implicit field representing the density distribu-
tion of 3D edges from a set of calibrated multi-view im-
ages, inspired by the recent success of neural radiance field
(NeRF) [20]. We refer to this edge density field as Neu-
ral Edge Field (NEF). Similar to NeRF, NEF is optimized
with a view-based rendering loss where a 2D edge map is
rendered at a given view and is compared to the ground-
truth edge map extracted from the image of that view. The
volumetric rendering is based on edge density and color
(gray-scale) predicted by MLPs along viewing rays. Dif-
ferent from NeRF, however, our goal is merely to optimize
the NEF which is later used for extracting parametric 3D
curves; no novel view synthesis is involved. The rendering-
based differentiable optimization of 3D edge density fully
exploits 2D edge detection, without needing a 3D geomet-
ric operator or cross-view edge correspondence. The latter
is implicitly learned with multi-view consistency.
Directly optimizing NEF as NeRF-like density is prob-
lematic since the range of density can be arbitrarily large
and different from scene to scene, and it is hard to select
a proper threshold to extract useful geometric shapes (e.g.,
3D surfaces for NeRF and 3D edges for NEF). Moreover,
NeRF density usually does not approximate the underlying
3D shape well due to noise. Therefore, we seek to confine the edge density to the range of [0, 1] through learning
a mapping function with a learnable scaling factor to map
the edge density to the actual NEF density. By doing so, we
can easily choose a threshold to extract edges robustly from
the optimized edge density.
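A compact sketch of the bounded edge density and its volumetric rendering into a per-ray edge intensity is given below; the MLP width is a placeholder and the sigmoid-with-learnable-scale mapping follows the description above only loosely, so it should be read as an assumption rather than the paper's architecture.

```python
# Sketch of a range-limited edge density field and its NeRF-style rendering into a
# 2D edge map that can be compared against detected image edges.
import torch
import torch.nn as nn

class EdgeField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
        self.scale = nn.Parameter(torch.tensor(1.0))      # learnable scaling factor

    def forward(self, xyz):                               # xyz: (..., 3)
        raw = self.mlp(xyz).squeeze(-1)
        return torch.sigmoid(self.scale * raw)            # edge density confined to (0, 1)

def render_edge_map(field, origins, dirs, t_vals):
    """origins/dirs: (R, 3) per-ray origin and direction; t_vals: (S,) sample depths."""
    pts = origins[:, None, :] + t_vals[None, :, None] * dirs[:, None, :]   # (R, S, 3)
    density = field(pts)                                                    # (R, S)
    delta = t_vals[1:] - t_vals[:-1]
    delta = torch.cat([delta, delta[-1:]], dim=0)
    alpha = 1.0 - torch.exp(-density * delta)                               # (R, S)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    return (trans * alpha).sum(dim=1)        # rendered edge intensity per ray
```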
Another issue with NEF optimization is the incompatible
visibility of the edge density field and the edges detected in
images. While the former is basically a wireframe represen-
tation of the underlying 3D shape and all edges are visible
from any view (i.e., no self-occlusion), edges in 2D images
can be occluded by the object itself. This leads to inconsis-
tent supervisions of different views with different visibility
and may cause false negative: An edge that should have
been present in NEF according to one view visible to the
edge may be suppressed by other views invisible. To ad-
dress this issue, we opt to 1) impose consistency between
density and color in NEF and 2) give less punishment on
non-edge pixels in the rendering loss, to allow the NEF to
keep all edges seen from all views. This essentially makes
NEF view-independent which is reasonable.
Having obtained the edge density, we fit parametric
curves by treating the 3D density volume as a point cloud of
edges. We optimize the control points of curves in a coarse-
to-fine manner. Since initialization is highly important to
such a non-convex optimization, we first apply line fitting
in a greedy fashion to cover most points. Based on the ini-
tialization, we then upgrade lines to cubic Bézier curves by
adding extra control points and optimize all curves simulta-
neously with an extra endpoint regularization.
We build a benchmark with a synthetic dataset consist-
ing of 115 CAD models with complicated shape structures
from ABC dataset [14] and utilize BlenderProc [7] to ren-
der posed images. Extensive experiments on the proposed
dataset show that NEF, which is self-trained with only 2D
supervisions, outperforms existing state-of-the-art methods
on all metrics. Our contributions include:
• A self-supervised 3D edge detection from multi-view2D edges based neural implicit field optimization.
• Several technical designs to ensure learning a range-
limited and view-independent NEF and an iterative op-
timization strategy to reconstruct parametric curves.
• A benchmark for evaluating and comparing various
edge/curve extraction methods.
|
Yu_CelebV-Text_A_Large-Scale_Facial_Text-Video_Dataset_CVPR_2023 | Abstract
Text-driven generation models are flourishing in video
generation and editing. However, face-centric text-to-video
generation remains a challenge due to the lack of a suitable
dataset containing high-quality videos and highly relevant
texts. This paper presents CelebV-Text , a large-scale, di-
verse, and high-quality dataset of facial text-video pairs, to
facilitate research on facial text-to-video generation tasks.
CelebV-Text comprises 70,000 in-the-wild face video clips
with diverse visual content, each paired with 20 texts gen-
erated using the proposed semi-automatic text generation
strategy. The provided texts are of high quality, describ-
ing both static and dynamic attributes precisely. The supe-
riority of CelebV-Text over other datasets is demonstrated
via comprehensive statistical analysis of the videos, texts,
and text-video relevance. The effectiveness and potential
of CelebV-Text are further shown through extensive self-
evaluation. A benchmark is constructed with representa-
tive methods to standardize the evaluation of the facial text-
to-video generation task. All data and models are publicly
available1.
*Equal contribution.
1Project page: https://celebv-text.github.io | 1. Introduction
Text-driven video generation has recently garnered sig-
nificant attention in the fields of computer vision and com-
puter graphics. By using text as input, video content can be
generated and controlled, inspiring numerous applications
in both academia and industry [5,34,43,47]. However, text-
to-video generation still faces many challenges, particularly
in the face-centric scenario where generated video frames
often lack quality [18, 34, 37] or have weak relevance to in-
put texts [2,4,39,67]. We believe that one of the main issues
is the absence of a well-suited facial text-video dataset con-
taining high-quality video samples and text descriptions of
various attributes highly relevant to videos.
Constructing a high-quality facial text-video dataset
poses several challenges, mainly in three aspects. 1) Data
collection. The quality and quantity of video samples
largely determine the quality of generated videos [11, 45,
48, 60]. However, obtaining such a large-scale dataset with
high-quality samples while maintaining a natural distribu-
tion and smooth video motion is challenging. 2) Data an-
notation. The relevance of text-video pairs needs to be en-
sured. This requires a comprehensive coverage of text for
describing the content and motion appearing in the video,
such as light conditions and head movements. 3) Text gen-
eration. Producing diverse and natural texts is non-trivial.
Manual text generation is expensive and not scalable. While
auto-text generation is easily extensible, it is limited in nat-
uralness.
To overcome the challenges mentioned above, we care-
fully design a comprehensive data construction pipeline that
includes data collection and processing, data annotation,
and semi-auto text generation. First, to obtain raw videos,
we follow the data collection steps of CelebV-HQ, which
has proven to be effective in [66]. We introduce a minor
modification to the video processing step to improve the
video’s smoothness further. Next, to ensure highly relevant
text-video pairs, we analyze videos from both temporal dy-
namics and static content and establish a set of attributes
that may or may not change over time. Finally, we propose
a semi-auto template-based method to generate texts that
are diverse and natural. Our approach leverages the advan-
tages of both auto- and manual-text methods. Specifically,
we design a rich variety of grammar templates, as in [10, 52], to
parse annotations and manual texts, which are flexibly com-
bined and modified to achieve high diversity, complexity,
and naturalness.
With the proposed pipeline, we create CelebV-Text ,
a Large-Scale Facial Text-Video Dataset, which includes
70,000 in-the-wild video clips with a resolution of at least
512×512 and 1,400,000 text descriptions, 20 for each
clip. As depicted in Figure 1, CelebV-Text consists of high-
quality video samples and text descriptions for realistic face
video generation. Each video is annotated with three types
of static attributes (40 general appearances, 5 detailed ap-
pearances, and 6 light conditions) and three types of dy-
namic attributes (37 actions, 8 emotions, and 6 light direc-
tions). All dynamic attributes are densely annotated with
start and end timestamps, while manual-texts are provided
for labels that cannot be discretized. Furthermore, we have
designed three templates for each attribute type, resulting in
a total of 18 templates that can be flexibly combined. All
attributes and manual-texts are naturally described in our
generated texts.
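To make the template-based generation concrete, the sketch below shows how discrete attribute annotations with timestamps could be parsed into natural-sounding sentences by randomly combining per-attribute-type templates. The attribute names and templates here are invented for illustration and are not the actual 18 CelebV-Text templates.

```python
import random

# Hypothetical templates; the real dataset uses 18 templates (3 per attribute type).
TEMPLATES = {
    "appearance": ["A person with {attrs} appears in the video.",
                   "The subject has {attrs}."],
    "action":     ["{subject} {action} from {start:.1f}s to {end:.1f}s.",
                   "Between {start:.1f}s and {end:.1f}s, {subject} {action}."],
    "emotion":    ["{subject} looks {emotion} during the clip."],
}

def describe(annotation, subject="the person"):
    """Combine per-type templates into one description for a single clip."""
    sentences = []
    attrs = ", ".join(annotation.get("appearance", []))
    if attrs:
        sentences.append(random.choice(TEMPLATES["appearance"]).format(attrs=attrs))
    for act in annotation.get("actions", []):          # densely timestamped attributes
        sentences.append(random.choice(TEMPLATES["action"]).format(
            subject=subject.capitalize(), action=act["label"],
            start=act["start"], end=act["end"]))
    for emo in annotation.get("emotions", []):
        sentences.append(random.choice(TEMPLATES["emotion"]).format(
            subject=subject.capitalize(), emotion=emo))
    return " ".join(sentences)

example = {
    "appearance": ["wavy hair", "eyeglasses"],
    "actions": [{"label": "turns the head to the left", "start": 1.2, "end": 2.8}],
    "emotions": ["happy"],
}
print(describe(example))
```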
CelebV-Text surpasses existing face video datasets [11]
in terms of resolution (over 2 times higher), number of sam-
ples, and more diverse distribution. In addition, the texts in
CelebV-Text exhibit higher diversity, complexity, and natu-
ralness than those in text-video datasets [19, 66]. CelebV-
Text also shows high relevance of text-video pairs, validated
by our text-video retrieval experiments [17]. To further ex-
amine the effectiveness and potential of CelebV-Text, we
evaluate it on a representative baseline [19] for facial text-
to-video generation. Our results show better relevance be-
tween generated face videos and texts when compared to
a state-of-the-art large-scale pretrained model [26]. Fur-
thermore, we show that a simple modification of [19] with
text interpolation can significantly improve temporal coher-
ence. Finally, we present a new benchmark for text-to-video
generation to standardize the facial text-to-video generation
task, which includes representative models [5, 19] on three
text-video datasets.
The main contributions of this work are summarized as follows: 1) We propose CelebV-Text, the first large-scale
facial text-video dataset with high-quality videos, as well
as rich and highly-relevant texts, to facilitate research in fa-
cial text-to-video generation. 2) Comprehensive statistical
analyses are conducted to examine video/text quality and
diversity, as well as text-video relevance, demonstrating the
superiority of CelebV-Text. 3) A series of self-evaluations
are performed to demonstrate the effectiveness and poten-
tial of CelebV-Text. 4) A new benchmark for text-to-video
generation is constructed to promote the standardization of
the facial text-to-video generation task.
|
Zhang_Delivering_Arbitrary-Modal_Semantic_Segmentation_CVPR_2023 | Abstract
Multimodal fusion can make semantic segmentation
more robust. However, fusing an arbitrary number of
modalities remains underexplored. To delve into this prob-
lem, we create the DELIVER arbitrary-modal segmenta-
tion benchmark, covering Depth, LiDAR, multiple Views,
Events, and RGB. Aside from this, we provide this dataset in
four severe weather conditions as well as five sensor failure
cases to exploit modal complementarity and resolve par-
tial outages. To make this possible, we present the arbi-
trary cross-modal segmentation model CMNeXt. It en-
compasses a Self-Query Hub (SQ-Hub) designed to extract
effective information from any modality for subsequent fu-
sion with the RGB representation and adds only negligible
amounts of parameters ( ∼0.01M) per additional modal-
ity. On top, to efficiently and flexibly harvest discrimina-
tive cues from the auxiliary modalities, we introduce the
simple Parallel Pooling Mixer (PPX). With extensive experi-
ments on a total of six benchmarks, our CMNeXt achieves
state-of-the-art performance on the DELIVER, KITTI-360,
MFNet, NYU Depth V2, UrbanLF, and MCubeS datasets,
allowing scaling from 1 to 81 modalities. On the freshly
collected DELIVER, the quad-modal CMNeXt reaches
up to 66.30% in mIoU with a +9.10% gain compared to
the mono-modal baseline.1
| 1. Introduction
With the explosion of modular sensors, multimodal fu-
sion for semantic segmentation has progressed rapidly re-
cently [ 5,11,48] and in turn has stirred growing inter-
est to assemble more and more sensors to reach higher
and higher segmentation accuracy aside from more robust
scene understanding. However, most works [ 34,75,103]
and multimodal benchmarks [ 29,61,91] focus on specific
sensor pairs, which lag behind the current trend of fusing
*Equal contribution.
†Corresponding author (e-mail: [email protected] ).
1The DELIVER dataset and our code will be made publicly available
at: https://jamycheung.github.io/DELIVER.html.
[Figure 1 bar plots of mIoU (%): (a) RGB-D-E-L fusion; (b) RGB-A-D-N fusion; (c) RGB-Light Field.]
Figure 1. Arbitrary-modal segmentation results of CMNeXt using:
(a) {RGB, Depth, Event, LiDAR} on our DELIVER dataset;
(b) {RGB, Angle of Linear Polarization (AoLP), Degree of Linear Polarization (DoLP), Near-Infrared (NIR)} on MCubeS [44];
(c) {RGB, 8/33/80 sub-aperture Light Fields (LF8/LF33/LF80)} on UrbanLF-Syn [59], respectively.
Figure 2. Comparing CMX [48], HRFuser [4], and our CMNeXt in sensor failure (i.e., LiDAR Jitter) on the DELIVER dataset.
more and more modalities [4, 70], i.e., progressing towards
Arbitrary-Modal Semantic Segmentation (AMSS).
When looking into AMSS, two observations become ap-
parent. Firstly, an increasing amount of modalities should
provide more diverse complementary information, mono-
tonically increasing segmentation accuracy. This is di-
rectly supported by our results when incrementally adding
and fusing modalities as illustrated in Fig. 1a (RGB-
Depth-Event-LiDAR), Fig. 1b (RGB-AoLP-DoLP-NIR),
and Fig. 1c when adding up to 80 sub-aperture light-field
modalities (RGB-LF8/-LF33/-LF80). Unfortunately, this
great potential cannot be uncovered by previous cross-
Figure 3. Comparison of multimodal fusion paradigms, such as
(a) merging with separate branches [ 4], (b) distributing with a joint
branch [ 70], and (c) our hub2fuse with asymmetric branches.
modal fusion methods [ 9,77,99], which follow designs for
pre-defined modality combinations. The second observa-
tion is that the cooperation of multiple sensors is expected
to effectively combat individual sensor failures. Most of
the existing works [ 67,72,76] are built on the assump-
tion that each modality is always accurate. Under par-
tial sensor faults, which are common in real-life robotic
systems, e.g. LiDAR Jitter, fusing misaligned sensing data
might even degrade the segmentation performance, as de-
picted with CMX [ 48] and HRFuser [ 4] in Fig. 2. These
two critical observations remain to a large extent neglected.
To address these challenges, we create a benchmark
based on the CARLA simulator [ 19], with Depth,LiDAR,
Views, Events, and RGB images: The DELIVER Multi-
modal dataset. It features severe weather conditions and five
sensor failure modes to exploit complementary modalities
and resolve partial sensor outages. To profit from all this,
we present the arbitrary cross-modal CMNeXt segmenta-
tion model. Without increasing the computation overhead
substantially when adding more modalities, CMNeXt incor-
porates a novel Hub2Fuse paradigm (Fig. 3c). Unlike re-
lying on separate branches (Fig. 3a) which tend to be com-
putationally costly or using a single joint branch (Fig. 3b)
which often discards valuable information, CMNeXt is an
asymmetric architecture with two branches, one for RGB
and another for diverse supplementary modalities.
The key challenge lies in designing the two branches
to pick up multimodal cues. Specifically, at the hubstep
ofHub2Fuse , to gather useful complementary informa-
tion from auxiliary modalities, we design a Self-Query
Hub (SQ-Hub), which dynamically selects informative fea-
tures from all modality-sources before fusion with the RGB
branch. Another great benefit of SQ-Hub is the ease of ex-
tending it to an arbitrary number of modalities, at negligible
parameters increase (∼0.01M per modality). At the fusion
step, fusing sparse modalities such as LiDAR or Event data
can be difficult to handle for joint branch architectures with-
out explicit fusion such as TokenFusion [ 70]. To circum-
vent this issue and make best use of both dense and sparse
modalities, we leverage cross-fusion modules [ 48] and cou-
ple them with our proposed Parallel Pooling Mixer (PPX), which efficiently and flexibly harvests the most discrimina-
tive cues from any auxiliary modality. These design choices
come together in our CMNeXt architecture, which paves the
way for AMSS (Fig. 1). By carefully putting together alter-
native modalities, CMNeXt can overcome individual sensor
failures and enhance segmentation robustness (Fig. 2).
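The sketch below illustrates one plausible reading of the Self-Query Hub: each auxiliary modality scores its own features with a tiny per-modality head (hence the negligible parameter growth), and the scores softly select which modality contributes at each location before fusion with the RGB branch. This is an illustrative guess in PyTorch, not the released CMNeXt code.

```python
import torch
import torch.nn as nn

class SelfQueryHub(nn.Module):
    """Illustrative sketch: each auxiliary modality predicts its own spatial
    'informativeness' map with a 1x1 conv (very few parameters), and the maps
    softly select features before fusion with the RGB branch."""
    def __init__(self, channels, num_modalities):
        super().__init__()
        self.scorers = nn.ModuleList(
            [nn.Conv2d(channels, 1, kernel_size=1) for _ in range(num_modalities)])

    def forward(self, aux_feats):
        # aux_feats: list of (B, C, H, W) feature maps, one per auxiliary modality
        scores = torch.stack(
            [scorer(f) for scorer, f in zip(self.scorers, aux_feats)], dim=0)  # (M, B, 1, H, W)
        weights = torch.softmax(scores, dim=0)        # modalities compete per pixel
        feats = torch.stack(aux_feats, dim=0)         # (M, B, C, H, W)
        return (weights * feats).sum(dim=0)           # (B, C, H, W) selected auxiliary feature

# usage sketch: the selected auxiliary feature is then cross-fused with the RGB feature
hub = SelfQueryHub(channels=64, num_modalities=3)
aux = [torch.randn(2, 64, 32, 32) for _ in range(3)]  # e.g., depth, event, LiDAR features
selected = hub(aux)
print(selected.shape)  # torch.Size([2, 64, 32, 32])
```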
With comprehensive experiments on DELIVER and
five additional public datasets, we gather insight into the
strength of the CMNeXt model. On DELIVER, CMNeXt
obtains 66.30% in mIoU with a +9.10% gain compared
to the RGB-only baseline [ 78]. On UrbanLF-Real [ 59]
and MCubeS [ 44] datasets, CMNeXt surpasses the previ-
ous best methods by +3.90% and+8.68%, respectively.
Compared to previous state-of-the-art methods, our model
achieves comparable performance on bi-modal NYU Depth
V2 [61] as well as MFNet [ 29] and outperforms all previous
modality-specific methods on KITTI-360 [ 45].
At a glance, we deliver the following contributions:
• We create the new benchmark DELIVER for
Arbitrary-Modal Semantic Segmentation (AMSS)
with four modalities, four adverse weather conditions,
and five sensor failure modes.
• We revisit and compare different multimodal fusion
paradigms and present the Hub2Fuse paradigm with
an asymmetric architecture to attain AMSS.
• The universal arbitrary cross-modal fusion model CM-
NeXt is proposed, with a Self-Query Hub (SQ-Hub)
for selecting informative features and a Parallel Pool-
ing Mixer (PPX) for harvesting discriminative cues.
• We investigate AMSS by fusing up to a total of 80
modalities and notice that CMNeXt achieves state-of-
the-art performances on six datasets.
|
Yao_Explicit_Boundary_Guided_Semi-Push-Pull_Contrastive_Learning_for_Supervised_Anomaly_Detection_CVPR_2023 | Abstract
Most anomaly detection (AD) models are learned using
only normal samples in an unsupervised way, which may result in an ambiguous decision boundary and insufficient dis-
criminability. In fact, a few anomaly samples are often
available in real-world applications, and the valuable knowledge of known anomalies should also be effectively ex-
ploited. However, utilizing a few known anomalies during training may cause another issue: the model may be biased by those known anomalies and fail to generalize
to unseen anomalies. In this paper, we tackle supervised
anomaly detection, i.e., we learn AD models using a few
available anomalies with the objective to detect both the seen and unseen anomalies. We propose a novel explicit boundary guided semi-push-pull contrastive learning mechanism, which can enhance the model's discriminability while mitigating the bias issue. Our approach is based on two core designs: First, we find an explicit and compact sepa-
rating boundary as the guidance for further feature learn-
ing. As the boundary only relies on the normal feature distribution, the bias problem caused by a few known anomalies can be alleviated. Second, a boundary guided semi-push-pull loss is developed to only pull the normal features together while pushing the abnormal features apart
from the separating boundary beyond a certain margin region. In this way, our model can form a more explicit and
discriminative decision boundary to distinguish known and also unseen anomalies from normal samples more effec-
tively. Code will be available at https://github.com/xcyao00/BGAD.
| 1. Introduction
Anomaly detection (AD) has received widespread atten-
tion in diverse domains, such as industrial defect inspec-
tion [4, 8, 10, 50] and medical lesion detection [12, 38]. Most
previous anomaly detection methods [ 1,3,5,6,8,10,15,30,
33,47,50–52] are unsupervised and pay much attention to
normal samples while inadvertently overlooking anomalies, because it is difficult to collect sufficient anomalies of all
kinds. However, learning only from normal samples
may limit the discriminability of the AD models [ 12,24].
As illustrated in Figure 1(a), without anomalies, the de-
cision boundaries are generally implicit and not discrimi-
native enough. Insufficient discriminability is a
common issue in unsupervised anomaly detection due to the lack of knowledge about anomalies. In fact, a few anomalies are usually available in real-world applications, which can be exploited effectively to address or alleviate this issue.
Recently, methods that can be called semi-supervised
AD [ 27,34,36] or AD with outlier exposure [ 16,17] begin to
focus on those available anomalies. These methods attempt to learn knowledge from anomalies by one-class classification with anomalies as negative samples [34, 36], by supervised binary classification [16, 17], or by utilizing the deviation loss to optimize an anomaly scoring network [27].
They show that the detection performance can be im-
proved significantly even with a few anomalies. However,
the known anomalies cannot represent all kinds of anomalies. These methods may be biased by the known anomalies and
fail to generalize to unseen anomalies (see Figure 5).
Therefore, to address the two above issues, we tackle
supervised anomaly detection [ 12], in which a few known
anomalies can be effectively exploited to train discrimina-
tive AD models with the objective to improve detection per-formance on the known anomalies and generalize well to
unseen anomalies. Compared with unsupervised AD, su-
pervised AD is more meaningful for real-world AD appli-
cations, because the detected anomalies can be used to further improve the discriminability and generalizability of the model. To this end, we propose a novel Boundary Guided
Anomaly Detection (BGAD) model, which has two core
Figure 1. Conceptual illustration of our method. (a) In most unsupervised AD models, the anomaly score distribution usually has ambigu-
ous regions, which makes it difficult to get one ideal decision boundary. E.g. , the left boundary will cause many false negatives, while
the right boundary may induce many false positives. (b) With the normalized normal feature distribution, a pair of explicit and compact
(close to the normal distribution) boundaries can be obtained easily. (c) With the proposed BG-SPP loss, boundary guided optimizing canbe implemented to obtain an unambiguous anomaly score distribution: a significant gap between the normal and abnormal distributions.
designs as illustrated in Figure 1: explicit boundary gener-
ating and boundary guided optimizing.
•Explicit Boundary Generating. We first employ nor-
malizing flow [ 14] to learn a normalized normal feature dis-
tribution, and obtain an explicit separating boundary, which is close to the normal feature distribution edge and con-
trolled by a hyperparameter β (i.e., the normal boundary
in Figure 1(b)). The obtained explicit separating boundary
only relies on the normal distribution and has no relation
with the abnormal samples; thus, the bias problem caused by the insufficient known anomalies can be mitigated.
•Boundary Guided Optimizing. After obtaining the
explicit separating boundary, we then propose a boundaryguided semi-push-pull (BG-SPP) loss to exploit anomalies
for learning more discriminative features. With the BG-SPP loss, only the normal features whose log-likelihoods
are smaller than the boundary are pulled together to form a more compact normal feature distribution (semi-pull);
while the abnormal features whose log-likelihoods arelarger than the boundary are pushed apart from the bound-
ary beyond a certain margin region (semi-push).
In this way, our model can form a more explicit and dis-
criminative separating boundary and also a reliable margin
region for distinguishing anomalies more effectively (see
Figure 1(c), 6). Furthermore, rarity is a critical problem
of anomalies and may make feature learning inefficient. We
thus propose RandAugment-based Pseudo Anomaly Gener-
ation, which can simulate anomalies by creating local irreg-ularities in normal samples, to tackle the rarity challenge.
In summary, we make the following main contributions:
1. We propose a novel Explicit Boundary Guided super-
vised AD modeling method, in which both normal and ab-normal samples are exploited effectively by well-designedexplicit boundary generating and boundary guided optimiz-ing. With the proposed AD method, higher discriminabilityand lower bias risk can be achieved simultaneously.
2. To exploit a few known anomalies effectively, we pro-
pose a BG-SPP loss to pull together normal features while
pushing abnormal features apart from the separating bound-ary, thus more discriminative features can be learned.
3. We achieve SOTA results on the widely-used
MVTecAD benchmark, with the performance of 99.3%
image-level AUROC and 99.2% pixel-level AUROC.
|
Yang_Progressive_Open_Space_Expansion_for_Open-Set_Model_Attribution_CVPR_2023 | Abstract
Despite the remarkable progress in generative technol-
ogy, the Janus-faced issues of intellectual property protec-
tion and malicious content supervision have arisen. Efforts
have been paid to manage synthetic images by attributing
them to a set of potential source models. However, the
closed-set classification setting limits the application in
real-world scenarios for handling contents generated by
arbitrary models. In this study, we focus on a challenging
task, namely Open-Set Model Attribution (OSMA), to simul-
taneously attribute images to known models and identify
those from unknown ones. Compared to existing open-
set recognition (OSR) tasks focusing on semantic novelty,
OSMA is more challenging as the distinction between
images from known and unknown models may only lie in
visually imperceptible traces. To this end, we propose a
Progressive Open Space Expansion (POSE) solution, which
simulates open-set samples that maintain the same se-
mantics as closed-set samples but embedded with different
imperceptible traces. Guided by a diversity constraint, the
open space is simulated progressively by a set of lightweight
augmentation models. We consider three real-world sce-
narios and construct an OSMA benchmark dataset, includ-
ing unknown models trained with different random seeds,
architectures, and datasets from known ones. Extensive
experiments on the dataset demonstrate POSE is superior
to both existing model attribution methods and off-the-shelf
OSR methods. Github: https://github.com/ICTMCG/POSE
| 1. Introduction
Advanced generative modeling technology can create
extremely realistic visual content, leading to dramatic
changes in the field of AI-enhanced design, arts, and meta-
universe [23, 41, 47]. Whereas, the broadcasting of mali-
cious content generated by open source generation models
*Corresponding author
Figure 1. Open-set model attribution problem: The unknown
classes include unknown models different from known models
in training seeds, architectures, or training datasets. The goal is
to simultaneously attribute images to known models and identify
those from unknown ones.
has brought severe social impacts [9, 18, 43]. Furthermore,
new challenges have arisen for the ownership protection
of copyrighted digital generative models. To solve these
problems, model attribution, i.e., identifying the source
model of generated contents, has drawn increasing attention
recently [5, 33, 48, 50, 53].
Marra et al . [33] are among the first to point out that
GAN models leave specific fingerprints in the generated
images, just like camera devices. Further researches [5, 14,
48, 50, 53] validate the existence of GAN fingerprints and
show the feasibility of attributing fake images to a fixed
and finite set of known models. However, most of these
works focus on finding discriminative fingerprints among
the contents generated by different GAN models following
a simple closed-set setup. The ever-growing number of
unseen source models in the real-world scenario appeal
for a more generic approach. In this paper, we focus on
the problem of Open-Set Model Attribution (OSMA), i.e.,
simultaneously attributing images to known source models
and identifying those from unknown ones.
An intuitive way to solve OSMA is to apply off-the-shelf
open-set recognition (OSR) approaches to closed-set model
attribution classifiers. Traditional OSR methods leverage
the output logits to either reject or categorize the input
images [2, 44]. However, following the discriminative line,
the performance highly depends on the closed-set classi-
fier [46]. The learned feature space is not rectified for open-
set samples. Another mainstream of OSR methods is based
on simulating open-set samples or features [7,27,35,36,58].
By the simulation of open space, the learned feature space
is more compact for closed-set categories [58], leading the
detection of unknown samples more valid. Nevertheless,
existing works only leverage a single generator [7, 27, 35]
or mechanism [58] for simulating open-set samples or
features, which are not diverse enough to reduce the open
space risk for OSMA. A generator could produce open-set
samples of different semantics, but its fingerprint is fixed
and thus not suitable for the expansion of open space.
In this study, we propose Progressive Open Space Ex-
pansion (POSE) tailored for open-set model attribution,
which simulates the potential open space of challenging
unknown models through involving a set of augmentation
models progressively. For augmentation model construc-
tion, it can be daunting to consider all types of unknown
models with a variety of architectures. Instead, lightweight
networks composed of a few convolution layers are em-
ployed. They serve as “virtual” follow-up blocks of known
models, augmenting closed-set samples to open-set samples
surrounding them by modifying their fingerprints with re-
construction residuals. Despite the simple structure, these
augmentation models show the potential to model traces of
a variety of unknown models. To enrich the simulated open
space, multiple augmentation models are involved. Instead
of training them independently, we design a progressive
training mechanism to ensure the diversity of simulated
open space across models in a computation-effective way.
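The sketch below gives one plausible form of such a lightweight augmentation model and its training signals: a few convolution layers reconstruct the closed-set image, the small reconstruction residual perturbs the generator fingerprint, and a diversity term discourages the new model's trace from matching those of previously trained augmentation models. Layer widths, the residual-based trace definition, and the cosine-similarity diversity term are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class AugmentationModel(nn.Module):
    """A lightweight 'virtual follow-up block' of a known generator: a few conv
    layers that reconstruct the input; the small reconstruction residual alters
    low-level traces while preserving image content."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, 3, padding=1))

    def forward(self, x):
        return self.net(x)

def augmentation_losses(x, new_model, previous_models):
    """Illustrative objective: reconstruct the closed-set image while keeping the
    new model's residual (trace) different from those of earlier, frozen models,
    so the simulated open space keeps expanding instead of collapsing."""
    recon = new_model(x)
    rec_loss = (recon - x).pow(2).mean()
    new_trace = (recon - x).flatten(1)
    div_loss = 0.0
    for m in previous_models:
        with torch.no_grad():
            old_trace = (m(x) - x).flatten(1)
        # penalize high similarity between traces (a guess at the diversity constraint)
        div_loss = div_loss + torch.cosine_similarity(new_trace, old_trace, dim=1).abs().mean()
    return rec_loss, div_loss
```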
To validate the effectiveness of POSE in the open world,
we construct a benchmark dataset considering three chal-
lenging unknown scenarios as shown in Figure 1, which
includes unknown models trained with either a different
random seed, architecture or dataset from known mod-
els. Extensive experiments on the benchmark demonstrate
POSE is superior to both existing GAN attribution methods
and OSR methods. In summary, our contributions are:
•We tackle an important challenge for applying model
attribution to open scenarios, the open-set model attribution
problem, which attributes images to known models and
identifies images from unknown ones.
•We propose a novel solution named POSE, which sim-
ulates the potential open space of unknown models pro-gressively by a set of lightweight augmentation models, and
consequently reduces open space risk.
•We construct an OSMA benchmark simulating the real-
world scenarios, on which extensive experiments prove
the superiority of POSE compared with existing GAN
attribution methods and off-the-shelf OSR methods.
|
Yu_On_the_Difficulty_of_Unpaired_Infrared-to-Visible_Video_Translation_Fine-Grained_Content-Rich_CVPR_2023 | Abstract
Explicit visible videos can provide sufficient visual in-
formation and facilitate vision applications. Unfortunately,
the image sensors of visible cameras are sensitive to light
conditions like darkness or overexposure. To make up for
this, recently, infrared sensors capable of stable imaging
have received increasing attention in autonomous driving
and monitoring. However, most prosperous vision mod-
els are still trained on massive clear visible data, facing
huge visual gaps when deploying to infrared imaging sce-
narios. In such cases, transferring the infrared video to a
distinct visible one with fine-grained semantic patterns is
a worthwhile endeavor. Previous works improve the out-
puts by equally optimizing each patch on the translated vis-
ible results, which is unfair for enhancing the details on
content-rich patches due to the long-tail effect of pixel dis-
tribution. Here we propose a novel CPTrans framework
to tackle the challenge via balancing gradients of different
patches, achieving the fine-grained Content-rich Patches
Transferring. Specifically, the content-aware optimization
module encourages model optimization along gradients of
target patches, ensuring the improvement of visual details.
Additionally, the content-aware temporal normalization
module enforces the generator to be robust to the motions of
target patches. Moreover, we extend the existing dataset In-
fraredCity to more challenging adverse weather conditions
(rain and snow), dubbed as InfraredCity-Adverse1. Exten-
sive experiments show that the proposed CPTrans achieves
state-of-the-art performance under diverse scenes while re-
quiring less training time than competitive methods.
| 1. Introduction
Visible light cameras have broad applicability in com-
puter vision algorithms for the sufficient visual informa-
Corresponding author
1The code and dataset are available at https://github.com/BIT-DA/I2V-
Processing
Figure 1. (a) Visualization of pixel category distribution on dataset
IRVI [25] and semantic examples in random selected frames. We
conduct semantic segmentation via a pre-trained SegFormer [47]
on all visible video frames of IRVI and predict all pixels accord-
ing to the predefined categories in ADE20K [52]. (b) Outputs and
GradCAM++ results of different methods. ROMA pays equal at-
tention on the whole output, and the long-tail effect of training
data leads to the generation optimization along prejudiced gradi-
ents caused by the large proportion of pixels (e.g., sky and road).
We can generate more vivid details for content-rich patches (e.g.,
cars and road signs) than other methods.
tion (e.g., structure, texture, and color) of their captured
results. Most state-of-the-art vision algorithms have been
observed to show admirable performance under clear visi-
bility conditions [10, 16, 50]. Unfortunately, in most cases,
the real-world weather is unpredictable and diverse, leading
to complex and variable light conditions like overexposure
on snowing days. While image sensors of visible cameras
are sensitive to light conditions, their imaging results are
ambiguous in adverse weather. Under such circumstances,
people take infrared sensors to make up for the deficien-
cies of visible cameras. These infrared sensors can capture
stable structural information in diverse environments due to
the thermal imaging principle. In emergency avoidance or
hazard detection, they could be applied in autonomous driv-
ing and monitoring scenarios [28,30]. However, most com-
puter vision models are trained under visible data. Although
infrared videos outline surrounding objects all the time, the
existing large visual gaps and the lack of semantics hinder
the applications in infrared imaging scenarios. Therefore,
it is worth translating stable and accessible infrared videos
into clear visible ones. The translated visible results may
provide visual information for supporting visual applica-
tions like object detection and semantic segmentation.
To tackle the unpaired infrared-to-visible translation
challenge, previous methods [12, 13, 33, 37] mainly fo-
cus on learning color mapping functions with complex
manual coloring. The high costs and inevitable human
bias limit the application of such approaches. Inspired
by GANs [11], unpaired image translation methods have
emerged. For instance, cycle-based methods [17,21,22,48]
preserve content during the translation via the cycle con-
sistency [19]. Furthermore, one-sided methods [20, 31, 51]
maintain the content through hand-designed feature-level
restraints. However, substantial visual gaps between in-
frared and visible data lead to difficulties in generating
fine-grained visible results. Additionally, continuous in-
frared video signals are more challenging to transfer be-
cause of the need to ensure temporal consistency. Thus,
taking long-term information into account, [3, 7, 25] pro-
pose their temporal consistency losses to refine frameworks
based on unpaired image translation methods. Besides,
I2V-GAN [25] and ROMA [49] are tailored approaches
for unpaired infrared-to-visible video translation. Espe-
cially, ROMA has achieved state-of-the-art performance, il-
lustrating the importance of retaining structural informa-
tion and proposing cross-similarity consistency for struc-
ture. Despite its success, experiments indicate that cross-
similarity still faces challenges in accurately transferring
fine-grained (i.e., realistic and delicate) details, especially
for the content-rich patches.
In fact, most GAN-based methods utilize the PatchGAN
discriminator [19] for style optimization. Similar to the
classification task, the discriminator outputs w×h predic-
tions (True or False) for corresponding patches. To ana-
lyze the optimizing behavior of discriminators in the train-
ing process, we visualize the gradients via GradCAM++ [6]
and pixel category distribution as shown in Fig. 1. Grad-
CAM++ utilizes the gradients of the classification score to
identify the parts of interest. The left part (a) shows that
a few majority categories occupy most of the pixels while
most minority categories contain a limited number of pix-
els. Additionally, content-rich patches (including rich vi-
sual details like patches of cars) are mostly the minority
categories, while those content-lacking patches (including
lacking visual details like patches of the sky) are mostly
the majority. Upon exposure to new data, gradient-based
optimization methods, without any constraint, change the
learned encoding to minimize the objective function
globally [36]. Thus, equal optimization for each patch
(GradCAM++ of ROMA in Fig. 1 (b)) yields gradients prejudiced toward content-lacking patches (i.e., the majority pixels) when applied to generation. Moreover, it prevents the
discriminator from improving the quality of content-
rich patches. An approach is needed to break this optimization prejudice,
which is caused by the long-tail distribution commonly
exhibited in real-world training data [24, 45, 54].
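To make the imbalance concrete, the sketch below contrasts the uniform patch averaging of a PatchGAN-style discriminator loss with a hypothetical content-weighted variant; it only illustrates the issue being described and is not the CO module proposed in this paper.

```python
import torch
import torch.nn.functional as F

def patchgan_loss_uniform(patch_logits, real):
    # patch_logits: (B, 1, h, w) discriminator outputs, one logit per patch.
    target = torch.ones_like(patch_logits) if real else torch.zeros_like(patch_logits)
    # uniform averaging: content-lacking patches (sky, road) dominate the gradient
    return F.binary_cross_entropy_with_logits(patch_logits, target)

def patchgan_loss_weighted(patch_logits, real, content_weight):
    # content_weight: (B, 1, h, w), larger on content-rich patches (cars, road signs).
    target = torch.ones_like(patch_logits) if real else torch.zeros_like(patch_logits)
    per_patch = F.binary_cross_entropy_with_logits(patch_logits, target, reduction="none")
    return (content_weight * per_patch).sum() / content_weight.sum().clamp(min=1e-8)
```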
In this paper, we start with the analysis of difficulty
for fine-grained Content-rich Patches Transfer on unpaired
infrared-to-visible video translation and propose the CP-
Trans framework. To improve the results of content-rich
patches, we introduce two novel modules: Content-aware
Optimization (CO), balancing the gradients of patches for
improving generated content-rich patches, and Content-
aware Temporal Normalization (CTN), which enforces the
generator to be robust to the motion of content. Be-
sides, we extend the InfraredCity dataset to adverse weather
conditions (i.e., raining and snowing scenes), noted as
InfraredCity-Adverse , for promoting infrared-related re-
search. Our extensive evaluations of diverse datasets show
that our approach improves upon the previous ROMA
method, setting new state-of-the-art performances on un-
paired infrared-to-visible video translation. Remarkably,
further applications validate our task’s value and confirm
our approach’s admirable performance. Contributions are:
• We focus on the difficulty of fine-grained unpaired
infrared-to-visible video translation and point out the ex-
isting problem that models are optimized along preju-
diced gradients due to the long-tail effect.
• We propose a novel CPTrans framework consisting of
content-aware optimization and temporal normalization,
which benefits the generation of content-rich patches.
• We extend the InfraredCity to more challenging ad-
verse weather conditions (rain and snow), noted as
InfraredCity-Adverse for infrared-related study and val-
idate the remarkable success of CPTrans through suffi-
cient experiments (including further applications, i.e., ob-
ject detection, video fusion, and semantic segmentation).
|
Yin_AGAIN_Adversarial_Training_With_Attribution_Span_Enlargement_and_Hybrid_Feature_CVPR_2023 | Abstract
Deep neural networks (DNNs) trained by adversarial
training (AT) usually suffer from a significant robust gener-
alization gap, i.e., DNNs achieve high training robustness
but low test robustness. In this paper, we propose a generic
method to boost the robust generalization of AT methods
from the novel perspective of attribution span. To this end,
compared with standard DNNs, we discover that the gen-
eralization gap of adversarially trained DNNs is caused by
the smaller attribution span on the input image. In other
words, adversarially trained DNNs tend to focus on specific
visual concepts on training images, causing its limitation
on test robustness. In this way, to enhance the robustness,
we propose an effective method to enlarge the learned at-
tribution span. Besides, we use hybrid feature statistics for
feature fusion to enrich the diversity of features. Extensive
experiments show that our method can effectively improve
robustness of adversarially trained DNNs, outperforming
previous SOTA methods. Furthermore, we provide a the-
oretical analysis of our method to prove its effectiveness.
| 1. Introduction
Deep neural networks (DNNs) have shown remarkable
success in solving complex prediction tasks. However, re-
cent studies have shown that they are particularly vulnera-
ble to adversarial attacks [22], which take the form of small
perturbations to the input that cause DNNs to predict in-
* Zhen Xiao and Kelu Yao are the corresponding authors.
Figure 1. A visual illustration of attribution span under ResNet-
18. (a) is the original image; (b) and (c) are attribution spans of
the standard model and robust model in the inference phase, re-
spectively. ASC is Attribution Span Coverage; (d) is the differ-
ence between the standard model and the robust model in terms of
attribution span; (e) is the result after partial feature erasure of the
original image using (d).
correct outputs. The defense of adversarial examples has
been intensively studied in recent years and several defenses
against adversarial attacks have been proposed in a great
deal of work [17, 18].
Among the various existing defense strategies, adversar-
ial training (AT) [10, 15] has been shown to be one of the
most effective defenses [16] and has received a lot of atten-
tion from the research community. However, adversarially
trained DNNs typically show a significant robust general-
ization gap [27]. Intuitively, there is a large gap between the
training robustness and test robustness of the adversarially
trained model on the adversarial examples. Some existing
methods [23, 25, 27] narrow the robust generalization gap
from the perspective of weight loss landscapes. Other exist-
ing methods [13,24,32] enhance robust generalization from
the perspective of training strategies. However, this work
ignores a critical factor affecting generalization robustness,
which is the learned knowledgeable representation.
Training DNNs with robust generalization is particu-
larly difficult, typically possessing significantly higher sam-
ple complexity [8, 29, 31] and requiring more knowledge-
able [4, 19]. Compared with standard DNNs, we discover
that the generalization gap of adversarially trained DNNs
is caused by the smaller attribution span on the input im-
age. In other words, adversarially trained DNNs tend to
focus on specific visual concepts on training images [8],
causing its limitation on test robustness. Specifically, we
explore the difference between the standard model (training
w/o AT) and the robust model (training w/ AT) in the in-
ference phase through empirical experiments. As shown in
Figure 1 (b) and Figure 1 (c), the standard model and the
robust model have different attribution span for the same
image in the inference phase, and the attribution span of the
standard model is larger than that of the robust model in
general. Through our further exploration, we find that these
different spans (see Figure 1 (d)) affect the model’s deci-
sion on clean data and hardly affect the model’s decision
on adversarial examples. This indicates that AT enables the
model to learn robust features, but ignores the features of
generalization. This motivates us to design a method to en-
large the attribution span to ensure that the model focuses on
robust features while enhancing the focus on other features
to improve the generalization ability of the robust model.
To this end , we propose a generic method to boost the
robust generalization of AT from the novel perspective of
attribution span. Specifically, we use the class activation
mapping to obtain the attribution span of the model under
real and fake labels, and mix these two spans proportion-
ally to complete the enlargement of the attribution span and
make the model focus on the features within this span dur-
ing the training process. In addition, in order to increase
the diversity of features and ensure the stable training of the
model under the enlarged attribution span, we adopt the fea-
ture fusion implemented by hybrid feature statistics to fur-
ther improve the generalization ability of the model. Com-
pared to other methods, our method can further improve the
accuracy of the model on clean data and adversarial exam-
ples. Meanwhile, our work provides new insights into the
lack of good generalization of robust models.
Our main contributions are summarized as follows.
• We find that adversarially trained DNNs focus on asmaller span of features in the inference phase and
ignores some other spans of features. These spans
are generally associated with generalization ability and
have little impact on robustness.
• We propose a method to boost AT, called AGAIN,
which is short for Attribution Span Enlar Gement and
Hybrid Fe Ature Fus IoN. During model training, we
expand the region where the model focuses its features
while ensuring that it learns robust features, and com-
bine feature fusion to enhance the generalization of the
model over clean data and adversarial examples.
• Extensive experiments have shown that our proposed
method can better improve the accuracy of the model
on clean data and adversarial examples compared to
state-of-the-art AT methods. Particularly, it can be eas-
ily combined with other methods to further enhance
the effectiveness of the method.
|
Yoshida_Light_Source_Separation_and_Intrinsic_Image_Decomposition_Under_AC_Illumination_CVPR_2023 | Abstract
Artificial light sources are often powered by an elec-
tric grid, and then their intensities rapidly oscillate in re-
sponse to the grid’s alternating current (AC). Interestingly,
the flickers of scene radiance values due to AC illumina-
tion are useful for extracting rich information on a scene
of interest. In this paper, we show that the flickers due to
AC illumination are useful for intrinsic image decomposition
(IID). Our proposed method conducts the light source sepa-
ration (LSS) followed by the IID under AC illumination. In
particular, we reveal the ambiguity in the blind LSS via ma-
trix factorization and the ambiguity in the IID assuming the
diffuse reflection model, and then show why and how those
ambiguities can be resolved via a physics-based approach.
We experimentally confirmed that our method can recover
the colors of the light sources, the diffuse reflectance values,
and the diffuse and specular intensities (shadings) under
each of the light sources, and that the IID under AC illumi-
nation is effective for application to auto white balancing.
| 1. Introduction
Artificial light sources in our surroundings are often
powered by an electric grid, and therefore their intensities
rapidly oscillate in response to the grid’s alternating current
(AC). Such intensity oscillations cause flickers in the radi-
ance values of a scene illuminated by artificial light sources.
The flickers are usually too fast to notice with our naked
eyes, but can be captured by using cameras with short ex-
posure time settings [32]. It is known that the flickers could
make auto white balance unnatural [15].
Interestingly, the flickers are useful for extracting rich in-
formation on a scene of interest. Sheinin et al. [28] propose
a method for light source separation (LSS) under AC illu-
mination. Their method decomposes an image sequence of
a scene illuminated by multiple AC light sources into thebasis images of the scene, each of which is illuminated by
only one of the light sources, and the temporal intensity pro-
files of the light sources. They make use of their self-built
coded-exposure camera synchronized to AC and the dataset
of temporal intensity profiles of various light sources, and
then achieve LSS even for dark scenes such as a city-scale
scene at night.
In this paper, we show that the flickers due to AC illu-
mination are also useful for intrinsic image decomposition
(IID). Originally, IID recovers the shading and reflectance
images of a scene of interest from a single input image on
the basis of the Retinex theory [2, 19]. Those intrinsic prop-
erties of a scene is useful for computer vision applications
such as image segmentation [6], object recognition [25],
and shape from shading [14].
Our proposed method assumes a scene illuminated by
multiple AC light sources, and recovers the intrinsic prop-
erties of the scene and the light sources from an image se-
quence captured by using a consumer high speed camera.
In contrast to the conventional methods for IID, our method
assumes the dichromatic reflection model [27], and then re-
covers the intrinsic properties more than the reflectance and
shading images: the colors of the light sources, the diffuse
reflectance values, and the diffuse and specular intensities
(shadings) under each of the light sources.
Specifically, our proposed method conducts the blind
LSS via matrix factorization followed by the IID assuming
the dichromatic reflection model. In particular, we reveal
the ambiguity in the blind LSS under AC illumination via
matrix factorization [26], and then resolve the ambiguity by
integrating the LSS and the IID assuming the diffuse reflec-
tion model. Furthermore, we reveal the ambiguity in the
IID assuming the diffuse reflection model under AC illumi-
nation, and then resolve the ambiguity on the basis of the
dichromatic reflection model by taking specular highlights
into consideration1.
1It is analogous to uncalibrated photometric stereo; the GBR ambigu-
ity [3] can be resolved from specularity [8]
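As a rough illustration of the blind LSS step, the flickering image sequence can be flattened into a frames-by-pixels matrix and factorized into per-light temporal intensity profiles and per-light basis images, for example with non-negative matrix factorization; the ambiguity discussed above is precisely the non-uniqueness (scale and mixing) of such a factorization. The use of scikit-learn's NMF below is an assumption for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import NMF

def blind_light_source_separation(frames, num_lights):
    """frames: (T, H, W) grayscale image sequence captured at a high frame rate.
    Returns basis images (num_lights, H, W) and temporal profiles (num_lights, T).
    The factorization is only defined up to per-light scale and mixing, which is
    the ambiguity the subsequent IID step is used to resolve."""
    T, H, W = frames.shape
    M = frames.reshape(T, H * W)                          # rows: frames, cols: pixels
    model = NMF(n_components=num_lights, init="nndsvda", max_iter=500)
    profiles = model.fit_transform(M)                     # (T, num_lights) temporal intensities
    basis = model.components_.reshape(num_lights, H, W)   # per-light basis images
    return basis, profiles.T

# usage sketch
frames = np.random.rand(120, 32, 32)                      # stand-in for a captured sequence
basis, profiles = blind_light_source_separation(frames, num_lights=2)
print(basis.shape, profiles.shape)                        # (2, 32, 32) (2, 120)
```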
To show the effectiveness of our proposed method, we
conducted a number of experiments using both synthetic
and real images. We confirmed that our method works well,
i.e.can resolve the ambiguities in the LSS and the IID on
real images as well as synthetic images. In addition, we
show that the IID under AC illumination is effective for ap-
plication to auto white balancing.
The main contributions of this study are threefold. First,
we tackle a novel problem of the IID under AC illumination.
We conduct the blind LSS via matrix factorization followed
by the IID assuming the dichromatic reflection model, and
show that the flickers due to AC illumination are useful not
only for LSS but also for IID. Second, we reveal the am-
biguity in the blind LSS via matrix factorization and the
ambiguity in the IID assuming the diffuse reflection model.
Then, we show why and how those ambiguities can be re-
solved via a physics-based approach. Third, we experimen-
tally confirmed that our method can recover the colors of the
light sources, the diffuse reflectance values, and the diffuse
and specular intensities (shadings) under each of the light
sources, and that the IID under AC illumination is effective
for application to auto white balancing.
|
Xu_Open-Vocabulary_Panoptic_Segmentation_With_Text-to-Image_Diffusion_Models_CVPR_2023 | Abstract
We present ODISE: Open-vocabulary DIffusion-based
panoptic SEgmentation, which unifies pre-trained text-
image diffusion and discriminative models to perform open-
vocabulary panoptic segmentation. Text-to-image diffu-
sion models have the remarkable ability to generate high-
quality images with diverse open-vocabulary language de-
scriptions. This demonstrates that their internal represen-
tation space is highly correlated with open concepts in the
real world. Text-image discriminative models like CLIP , on
the other hand, are good at classifying images into open-
vocabulary labels. We leverage the frozen internal repre-
sentations of both these models to perform panoptic seg-
mentation of any category in the wild. Our approach out-
performs the previous state of the art by significant margins
on both open-vocabulary panoptic and semantic segmen-
tation tasks. In particular, with COCO training only, our
method achieves 23.4 PQ and 30.0 mIoU on the ADE20K
dataset, with 8.3 PQ and 7.9 mIoU absolute improvement
over the previous state of the art. We open-source our
code and models at https://github.com/NVlabs/
ODISE .
*Jiarui Xu was an intern at NVIDIA during the project. †equal contri-
bution. | 1. Introduction
Humans look at the world and can recognize limitless
categories. Given the scene presented in Fig. 1, besides
identifying every vehicle as a “truck”, we immediately un-
derstand that one of them is a pickup truck requiring a trailer
to move another truck. To reproduce an intelligence with
such a fine-grained and unbounded understanding, the prob-
lem of open-vocabulary recognition [36, 57, 76, 89] has re-
cently attracted a lot of attention in computer vision. How-
ever, very few works are able to provide a unified frame-
work that parses all object instances and scene semantics at
the same time, i.e., panoptic segmentation.
Most current approaches for open-vocabulary recogni-
tion rely on the excellent generalization ability of text-
image discriminative models [30, 57] trained with Internet-
scale data. While such pre-trained models are good at clas-
sifying individual object proposals or pixels, they are not
necessarily optimal for performing scene-level structural
understanding. Indeed, it has been shown that CLIP [57]
often confuses the spatial relations between objects [69].
We hypothesize that the lack of spatial and relational under-
standing in text-image discriminative models is a bottleneck
for open-vocabulary panoptic segmentation.
On the other hand, text-to-image generation using dif-
fusion models trained on Internet-scale data [1, 59, 61, 62,
90] has recently revolutionized the field of image synthe-
sis. It offers unprecedented image quality, generalizabil-
ity, composition-ability, and semantic control via the input
text. An interesting observation is that to condition the im-
age generation process on the provided text, diffusion mod-
els compute cross-attention between the text’s embedding
and their internal visual representation. This design im-
plies the plausibility of the internal representation of dif-
fusion models being well-differentiated and correlated to
high/mid-level semantic concepts that can be described by
language. As a proof-of-concept, in Fig.1 (center), we vi-
sualize the results of clustering a diffusion model’s internal
features for the image on the left. While not perfect, the
discovered groups are indeed semantically distinct and lo-
calized. Motivated by this finding, we ask the question of
whether Internet-scale text-to-image diffusion models can
be exploited to create universal open-vocabulary panoptic
segmentation learner for any concept in the wild?
To this end, we propose ODISE : Open-vocabulary
DIffusion-based panoptic SEgmentation (pronounced o-di-
see), a model that leverages both large-scale text-image dif-
fusion and discriminative models to perform state-of-the-
art panoptic segmentation of any category in the wild. An
overview of our approach is illustrated in Fig. 2. At a
high-level it contains a pre-trained frozen text-to-image dif-
fusion model into which we input an image and its cap-
tion and extract the diffusion model’s internal features for
them. With these features as input, our mask generator pro-
duces panoptic masks of all possible concepts in the image.
We train the mask generator with annotated masks avail-
able from a training set. A mask classification module then
categorizes each mask into one of many open-vocabulary
categories by associating each predicted mask’s diffusion
features with text embeddings of several object category
names. We train this classification module with either mask
category labels or image-level captions from the training
dataset. Once trained, we perform open-vocabulary panop-
tic inference with both the text-image diffusion and discrim-
inative models to classify a predicted mask. On many differ-
ent benchmark datasets and across several open-vocabulary
recognition tasks, ODISE achieves state-of-the-art accuracy
outperforming the existing baselines by large margins.
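The sketch below shows, in simplified form, how mask-pooled diffusion features could be matched against text embeddings of category names for open-vocabulary mask classification; it assumes the diffusion features have already been projected into the text-embedding space and is not the released ODISE code.

```python
import torch
import torch.nn.functional as F

def classify_masks(diffusion_feats, masks, text_embeds, tau=0.07):
    """Illustrative open-vocabulary mask classification.
    diffusion_feats: (C, H, W) frozen internal features of the diffusion model,
                     assumed already projected to the text-embedding dimension.
    masks: (N, H, W) binary masks from the mask generator.
    text_embeds: (K, C) embeddings of K category names from a text encoder.
    Returns (N, K) class probabilities for each predicted mask."""
    m = masks.float().flatten(1)                                    # (N, H*W)
    f = diffusion_feats.flatten(1)                                  # (C, H*W)
    # mask-pooled feature per predicted region
    pooled = (m @ f.T) / m.sum(dim=1, keepdim=True).clamp(min=1)    # (N, C)
    pooled = F.normalize(pooled, dim=-1)
    text = F.normalize(text_embeds, dim=-1)
    return (pooled @ text.T / tau).softmax(dim=-1)                  # (N, K)
```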
Our contributions are the following:
• To the best of our knowledge, ODISE is the first work
to explore large-scale text-to-image diffusion models
for open-vocabulary segmentation tasks.
• We propose a novel pipeline to effectively leverage
both text-image diffusion and discriminative models to
perform open-vocabulary panoptic segmentation.
• We significantly advance the field forward by out-
performing all existing baselines on many open-
vocabulary recognition tasks, and thus establish a new
state of the art in this space. |
Zhang_A_Loopback_Network_for_Explainable_Microvascular_Invasion_Classification_CVPR_2023 | Abstract
Microvascular invasion (MVI) is a critical factor for prog-
nosis evaluation and cancer treatment. The current diagnosis
of MVI relies on pathologists to manually find out cancer-
ous cells from hundreds of blood vessels, which is time-
consuming, tedious, and subjective. Recently, deep learning
has achieved promising results in medical image analysis
tasks. However, the unexplainability of black box models and
the requirement of massive annotated samples limit the clini-
cal application of deep learning based diagnostic methods.
In this paper, aiming to develop an accurate, objective,
and explainable diagnosis tool for MVI, we propose a Loop-
back Network (LoopNet) for classifying MVI efficiently. With
the image-level category annotations of the collected Patho-
logic Vessel Image Dataset (PVID), LoopNet is devised to
be composed of a binary classification branch and a cell locating
branch. The latter is devised to locate the area of cancer-
ous cells, regular non-cancerous cells, and background. For
healthy samples, the pseudo masks of cells supervise the
cell locating branch to distinguish the area of regular non-
cancerous cells and background. For each MVI sample, the
cell locating branch predicts the mask of cancerous cells.
Then the masked cancerous and non-cancerous areas of
the same sample are input back to the binary classification
branch separately. The loopback between two branches en-
ables the category label to supervise the cell locating branch
to learn the locating ability for cancerous areas. Experiment
results show that the proposed LoopNet achieves 97.5% ac-
curacy on MVI classification. Surprisingly, the proposed
loopback mechanism not only enables LoopNet to predict
the cancerous area but also facilitates the classification back-
bone to achieve better classification performance.
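A rough sketch of the loopback idea follows: for an MVI sample, the cell locating branch predicts a cancerous-cell mask, the masked cancerous and non-cancerous views are fed back to the binary classification branch, and the image-level label supervises both passes. The loss form, the three-class locator output, and the soft masking are illustrative assumptions, and the pseudo-mask supervision for healthy samples is omitted.

```python
import torch
import torch.nn.functional as F

def loopback_step(image, label, classifier, locator):
    """image: (B, 3, H, W); label: (B,) with 1 = MVI, 0 = healthy."""
    logits = classifier(image)                       # binary classification branch
    cls_loss = F.cross_entropy(logits, label)

    seg = locator(image).softmax(dim=1)              # (B, 3, H, W): background /
                                                     # non-cancerous / cancerous
    cancer_mask = seg[:, 2:3]                        # soft mask of cancerous cells
    loop_loss = 0.0
    if (label == 1).any():                           # loopback only for MVI samples
        mvi = image[label == 1]
        m = cancer_mask[label == 1]
        # masked cancerous area should be classified as MVI,
        # the remaining area as non-MVI (healthy)
        ones = torch.ones(len(mvi), dtype=torch.long, device=image.device)
        zeros = torch.zeros(len(mvi), dtype=torch.long, device=image.device)
        loop_loss = (F.cross_entropy(classifier(mvi * m), ones) +
                     F.cross_entropy(classifier(mvi * (1 - m)), zeros))
    return cls_loss + loop_loss
```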
*Corresponding author
Figure 1. Examples of MVI and healthy vessels extracted from
a pathological image of liver cancer. (a) The super large sample
contains numerous blood vessels of varied sizes. (b) The healthy
vessels are composed of a variety of cells with similar appearances.
(c) The cancerous cells have varied types and similar appearances
to parts of healthy cells.
| 1. Introduction
Microvascular invasion (MVI), referring to the appear-
ance of cancerous cells within microscopic venules or veins,
is a histological feature of cancer-related to aggressive
biological behavior [27, 56]. In clinical, MVI is usually
used as a reference standard for assessing cancer spread-
ing, which is a critical factor for prognosis evaluation and
treatment [8, 15, 43]. Accurate prognosis evaluation along
with appropriate treatment can effectively improve patient’s
life quality and prolong their life-span.
Currently, the diagnosis of MVI relies on pathologists to
manually find out cancerous cells from hundreds of blood
vessels, each of which usually contains dozens of cells. As
shown in Fig.1, each pathological sample is an image of
about 100,000×250,000px. These super-large pathological
images have three characteristics. Firstly, each sample con-
tains numerous blood vessels (Fig.1a). Secondly, each blood
vessel usually has a variety of cells with similar appearances
(Fig.1b). Thirdly, types of cancerous cells are also varied
(Fig.1c). Therefore, diagnosis of MVI requires the profes-
sional pathologist to discriminate cancerous/non-cancerous
cells carefully, which is time-consuming and tedious. The
discrimination relies on the individual pathologist’s prior
knowledge, which is subjective and leads to misdiagnosis
occasionally.
In recent years, deep learning has achieved promising
results in many areas [28 –30, 68 –72], including medical
image analysis. Many researchers focus on applying deep
learning techniques to image-based tumor analysis tasks,
such as tumor grading [3, 73], lesion area detection [14, 35],
vessel segmentation [18,31], cell detection/segmentation [42,
54, 67, 75], etc. The successful application of deep learning
relies on massive annotated samples. However, annotating
cancerous cells of all MVI images is very time-consuming.
Moreover, the black-box nature of deep learning
leads to unexplainable classification results, which limits
the clinical application of deep learning based diagnostic
methods.
In order to apply the deep learning technique to the MVI
analysis task, we collect the first Pathologic Vessel Image
Dataset (PVID) containing healthy blood vessel samples and
MVI samples from the pathological image of liver cancer
patients.
In this paper, we aim to develop an accurate, objective,
and explainable method for MVI diagnosis with as few anno-
tations as possible. As annotating the cells in each MVI vessel
is time-consuming, we only adopt easily-obtained image-
level category labels for developing the new approach.
For the explainable MVI classification, the developed ap-
proach should provide credible evidence, such as cancerous
areas and classification results. Therefore, the proposed ap-
proach is devised to be composed of two branches: the binary
classification branch and the cell locating branch. The binary
classification branch is used to classify the healthy blood ves-
sels and MVI vessels with corresponding vessel image-level
category labels as supervision. The initial goal of the cell
locating branch is to distinguish the cancerous cells. How-
ever, the supervision information for the cell locating branch
is insufficient, which requires exploring more supervision
information from the characteristic of MVI itself.
Firstly, based on the characteristic of blood vessel sam-
ples that most cells can be grouped into a few similar
templates according to structure and color, the correlation
filter [9, 22], which is widely adopted in the object tracking
area, can be used for locating most of the cells; hence the
results of this filter can be interpreted as pseudo masks of
cells for supervising the cell locating branch to distinguish the cell area from the background. Secondly, the healthy vessel
sample, which only contains non-cancerous cells and background,
is used for supervising the cell locating branch in distinguishing
the healthy area (non-cancerous cells and background) from the
cancerous cells. Lastly, we devise a loopback strategy between
the binary classification branch and the cell locating branch to
discover the cancerous area from each MVI sample.
For the loopback strategy, the cell locating branch first
predicts the cancerous area of the MVI sample, then the can-
cerous and non-cancerous areas of the same sample masked
with the predicted results are input back into the classifi-
cation branch separately. The devised loopback strategy
effectively achieves two goals: 1) utilizing the image-level
category label to supervise the cell locating branch in distin-
guishing the cancerous area from other areas; and 2) building the
direct relation between the predicted cancerous areas and the
final classification result.
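A hedged sketch of this loopback idea follows. The module names, the use of a sigmoid soft mask, and the equal loss weighting are illustrative assumptions, and details such as restricting the loopback to MVI samples are omitted; this is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def loopback_step(image, label, classifier, locator):
    """One schematic training step of the loopback strategy.

    image:      (B, 3, H, W) vessel images
    label:      (B,) long tensor, 1 = MVI, 0 = healthy
    classifier: binary classification branch, returns (B, 2) logits
    locator:    cell locating branch, returns (B, 1, H, W) cancerous-area scores
    """
    # Ordinary image-level supervision for the classification branch.
    cls_loss = F.cross_entropy(classifier(image), label)

    # Predict a soft cancerous-area mask and split the image into two views.
    mask = torch.sigmoid(locator(image))                 # (B, 1, H, W)
    cancerous_view = image * mask
    non_cancerous_view = image * (1.0 - mask)

    # Loopback: the masked cancerous view should be classified as MVI and the
    # remaining view as healthy, so the image-level label supervises the
    # locator through the classifier without pixel-level cancer annotations.
    loop_loss = (F.cross_entropy(classifier(cancerous_view), torch.ones_like(label)) +
                 F.cross_entropy(classifier(non_cancerous_view), torch.zeros_like(label)))

    return cls_loss + loop_loss
```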
Experimental results show that the loopback strategy not
only enables the proposed framework to predict precise
cancerous areas but also facilitates the classification branch
in achieving better classification performance. The two-branch
framework with the loopback strategy, termed Loopback
Network (LoopNet), achieves 97.5% accuracy on MVI clas-
sification.
In conclusion, the main contributions of our work are
summarized as follows:
•We propose the first deep learning based network,
termed as LoopNet, for explainable MVI classifica-
tion. LoopNet fully exploits the characteristics of MVI
samples to achieve blood vessel classification and cell
locating results simultaneously and can be extended to
MVI analysis tasks on various organs.
•The loopback strategy is devised for utilizing the cat-
egory label to supervise LoopNet distinguishing the
cancerous area from other regions, which effectively
builds the direct relation between the located cancerous
area and the final classification result.
•We collect the first Pathologic Vessel Image Dataset
(PVID) containing 4130 healthy blood vessel samples
and 857 MVI samples from the pathological images of
103 liver cancer patients.
•Experiments show that LoopNet achieves 97.5% accu-
racy on PVID, which verifies the potential of deep learn-
ing on the MVI classification task.
|
Yang_VectorFloorSeg_Two-Stream_Graph_Attention_Network_for_Vectorized_Roughcast_Floorplan_Segmentation_CVPR_2023 | Abstract
Vector graphics (VG) are ubiquitous in industrial de-
signs. In this paper, we address semantic segmentation
of a typical VG, i.e., roughcast floorplans with bare wall
structures, whose output can be directly used for further
applications like interior furnishing and room space mod-
eling. Previous semantic segmentation works mostly pro-
cess well-decorated floorplans in raster images and usu-
ally yield aliased boundaries and outlier fragments in seg-
mented rooms, due to pixel-level segmentation that ignores
the regular elements (e.g. line segments) in vector floor-
plans. To overcome these issues, we propose to fully uti-
lize the regular elements in vector floorplans for more in-
tegral segmentation. Our pipeline predicts room segmen-
tation from vector floorplans by dually classifying line seg-
ments as room boundaries, and regions partitioned by line
segments as room segments. To fully exploit the structural
relationships between lines and regions, we use two-stream
graph neural networks to process the line segments and par-
titioned regions respectively, and devise a novel modulated
graph attention layer to fuse the heterogeneous information
from one stream to the other. Extensive experiments show
that by directly operating on vector floorplans, we outper-
form image-based methods in both mIoU and mAcc. In ad-
dition, we propose a new metric that captures room integrity
and boundary regularity, which confirms that our method
produces much more regular segmentations. Source code is
available at https://github.com/DrZiji/VecFloorSeg.
| 1. Introduction
Vector graphics are widely used in industrial designs, in-
cluding graphic designs [26], 2D interfaces [5] and floor-
plans [15]. In particular, 2D floorplans consisting of ge-
ometric primitives (e.g., lines and curves) are the de-facto
Figure 1. Comparing the results of our vector graphics based
method (e) and the raster image-based method [39] (f); panels show
(a) input, (b) furnished counterpart, (c) 3D rendering, (d) label,
(e) ours, (f) image-based segmentation. Our result has straight
boundaries and consistent region labels, compared with the
image-based result, where red squares highlight semantic confu-
sion and the green square underscores a missing room prediction.
data representation for interior designs, indoor construction
and property development. In contrast to raster images with
fixed resolutions, vector graphics can be arbitrarily scaled
without artifacts such as blurring and aliasing details. On
the other hand, due to the irregularly structured data, it is
difficult to apply image-based backbone networks directly
to vector graphics for various applications.
Semantic segmentation of roughcast floorplans into
rooms with labeled types (e.g. bedroom, kitchen, etc.) is a
fundamental task for downstream applications. Interior de-
signers usually first draw the roughcast floorplan, including
basic elements like wall blocks and pipe barrels for property
development (Fig. 1(a)) [29, 34]. Afterwards, interior fur-
nishing, furniture layout, and 3D room spaces can be con-
structed and customized (Fig. 1(b)&(c)) [33]. During this
procedure, it is important to obtain semantic segmentation
of room spaces to cater to the above needs. While recogniz-
ing room layouts from wall structures is straightforward for
humans, automatic recognition with accurate semantics and
clean boundaries is challenging.
Recent works [19, 22, 23, 39] use powerful image-based
segmentation networks on rasterized floorplans to predict
room segmentation in a pixel-wise manner. Due to the
pixel-wise processing that ignores the integrity of structural
elements, their results tend to have jigsaw boundaries and
fragmented semantic regions as shown in Fig. 1(f). Besides,
these methods usually rely on texts and furniture to deter-
mine the semantic labels, which are not available in rough-
cast floorplans. Another line of prior works processes vec-
tor graphics for recognition, e.g., object detection [14, 28]
and symbol spotting [9,10,41]. However, to the best of our
knowledge, semantic segmentation of vector graphics, roughcast
floorplans in particular, has not been investigated before.
In this work, we make a first attempt at semantic seg-
mentation of 2D roughcast floorplans directly as vector
graphics. On one hand, by working with vector floor-
plans directly, the segmentation output is naturally regular
and compact vector graphics rather than dense pixels (cf.
Fig. 1(e)&(f)), which greatly facilitates downstream appli-
cations. On the other hand, the vector roughcast floorplans
pose challenges in the following aspects. First, rooms in
vector floorplans seldom contain complete contour lines
formed by the input line segments (see Fig. 1(a)&(d)). Sec-
ond, the type of a room is determined not only by its shape
but also by the relative relationships with its neighboring
rooms and within the overall floorplan.
To address the above challenges, we make two obser-
vations. First, room spaces can be subdivided into a set
of polygonal regions by input lines together with their ex-
tensions (Fig. 2), and their semantic classification as room
types defines room segmentation. Second, lines (including
extended lines) are potential boundaries of different rooms,
and their being classified as boundaries or not should assist
room segmentation in a dual direction.
Based on the two observations, we design a two-stream
graph attention network (GAT) for the task. As illustrated
in Fig. 2, the primal stream takes as input the primal graph
that encodes line endpoints as vertices and line segments as
edges, and predicts the boundary classification of edges; the
dual stream takes as input the dual graph that encodes parti-
tioned regions as vertices and their adjacency as edges, and
predicts the vertex classification of regions, which effec-
tively defines the semantic segmentation of a vector floor-
plan. Furthermore, the two streams should enhance each
other rather than being separated. To facilitate data ex-
change between two streams, we present a novel modulated
GAT layer to fuse information from one stream into the
graph network computation of the other stream. We evalu-
ate our approach on two large-scale floorplan datasets; both
classical metrics and a new metric that we develop to focus
on integral segmentation show that our results improve pre-
vious image-based results significantly. To summarize, we
make the following contributions:
•We approach semantic segmentation of vector rough-
cast floorplans through the dual aspects of boundary
line classification and region classification.
•We design two-stream graph neural networks to pro-
cess dual regions and primal lines respectively, and
devise a novel modulated GAT layer to exchange data
across streams.
•We propose a new metric to capture both accuracy and
integrity of the segmentation results.
•We obtain vector segmentation results on two floorplan
datasets, which show much more compact boundaries
and better integrity than raster image-based results.
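As a hedged illustration of the two-stream idea summarized in the contributions above, the sketch below computes GAT-style edge attention for one stream while gating the attention scores with per-edge features gathered from the other stream. The paper's exact modulated GAT layer is not specified here, so the gating form, dimensions, and names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossModulatedAttention(nn.Module):
    """GAT-style edge attention for one stream, modulated by the other stream."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.att = nn.Linear(2 * dim, 1)      # scores a (source, target) vertex pair
        self.gate = nn.Linear(dim, 1)         # modulation from the other stream

    def forward(self, x, edge_index, other_stream_feat):
        # x:                  (N, dim) vertex features of this stream
        # edge_index:         (2, E) source/target vertex indices
        # other_stream_feat:  (E, dim) per-edge features fused from the other stream
        src, dst = edge_index
        h = self.proj(x)
        logits = F.leaky_relu(self.att(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1))
        logits = logits * torch.sigmoid(self.gate(other_stream_feat)).squeeze(-1)

        # Softmax over the incoming edges of each target vertex, then aggregate.
        weight = torch.exp(logits - logits.max())
        denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, weight)
        out = torch.zeros_like(h).index_add_(0, dst, weight.unsqueeze(-1) * h[src])
        return out / denom.clamp(min=1e-6).unsqueeze(-1)
```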
|
Zhang_NICO_Towards_Better_Benchmarking_for_Domain_Generalization_CVPR_2023 | Abstract
Despite the remarkable performance that modern deep
neural networks have achieved on independent and iden-
tically distributed (I.I.D.) data, they can crash under dis-
tribution shifts. Most current evaluation methods for do-
main generalization (DG) adopt the leave-one-out strat-
egy as a compromise on the limited number of domains.
We propose a large-scale benchmark with extensive labeled
domains named NICO++along with more rational eval-
uation methods for comprehensively evaluating DG algo-
rithms. To evaluate DG datasets, we propose two metrics
to quantify covariate shift and concept shift, respectively.
Two novel generalization bounds from the perspective of
data construction are proposed to prove that limited con-
cept shift and significant covariate shift favor the evalua-
tion capability for generalization. Through extensive ex-
periments, NICO++shows its superior evaluation capabil-
ity compared with current DG datasets and its contribu-
tion in alleviating unfairness caused by the leak of oracle
knowledge in model selection. The data and code for the
benchmark based on NICO++are available at https:
//github.com/xxgege/NICO-plus .
| 1. Introduction
Machine learning has illustrated its excellent capability
in a wide range of areas [37, 65, 82]. Most current algo-
rithms minimize the empirical risk in training data relying
on the assumption that training and test data are indepen-
dent and identically distributed (I.I.D.). However, this ideal
hypothesis is hardly satisfied in real applications, especially
those high-stake applications such as healthcare [10, 49],
autonomous driving [1, 13, 39] and security systems [6],
owing to the limitation of data collection and intricacy of
the scenarios. Distribution shifts between training and test
data may lead to the unreliable performance of current
approaches in practice. Hence, instead of generalization
Figure 1. Covariate shift (M_cov in Equation (1), y-axis) and concept shift
(M^max_cpt in Equation (2), x-axis) of NICO++ and current DG datasets
(PACS, VLCS, DomainNet, Office-Home, iWildCam (WILDS), FMoW (WILDS),
Meta-shift, NICO). NICO++ has the lowest concept shift and highest
covariate shift, showing its superiority in evaluation capability.
within the training distribution, the ability to generalize un-
der distribution shift, domain generalization (DG) [75, 94],
is of more critical significance in realistic scenarios.
In the field of computer vision, benchmarks that pro-
vide the common ground for competing approaches often
act as a catalyst promoting the advance of research
[14]. An advanced DG benchmark should provide sufficient
diversity in distributions for both training and evaluating
DG algorithms [74, 78] while ensuring essential common
knowledge of categories for inductive inference across do-
mains [33, 34, 93]. The first property drives generalization
challenging, and the second ensures the solvability [81].
This requires adequate distinct domains and instructive fea-
tures for each category shared among all domains.
Current DG benchmarks, however, either lack sufficient
domains (e.g., 4 domains in PACS [40], VLCS [18] and
Office-Home [73] and 6 in DomainNet [53]) or are too simple
or limited to simulate significant distribution shifts in real
scenarios [2, 21, 30]. To enrich the diversity and perplexing
distribution shifts in training data as much as possible, most
of the current evaluation methods for DG adopt the leave-
one-out strategy, where one domain is considered as the test
domain and the others for training. This is not an ideal eval-
uation for generalization but a compromise due to the lim-
ited number of domains in current datasets, which impairs
the evaluation capability. To address this issue, we suggest
testing DG methods on multiple test domains instead of
one specific domain in each evaluation after training.
To benchmark DG methods comprehensively and sim-
ulate real scenarios where a trained model may encounter
any possible test data while providing sufficient diversity
in the training data, we construct a large-scale DG dataset
named NICO++ with extensive domains and two proto-
cols supported by aligned and flexible domains across cate-
gories, respectively, for better evaluation. Our dataset con-
sists of 80 categories, 10 aligned common domains for all
categories, 10 unique domains specifically for each cat-
egory, and more than 230,000 images. Abundant diver-
sity in both domain and category supports flexible assign-
ments for training and test, controllable degree of distribu-
tion shifts, and extensive evaluation on multiple target do-
mains. Images collected from real-world photos and consis-
tency within category concepts provide sufficient common
knowledge for recognition across domains on NICO++.
To evaluate DG datasets in-depth, we investigate dis-
tribution shifts on images (covariate shift) and common
knowledge for category discrimination across domains
(concept agreement) within them. Formally, we present
quantification for covariate shift and the opposite of concept
agreement, namely concept shift, via two novel metrics. We
propose two novel generalization bounds and analyze them
from the perspective of data construction instead of models.
Through these bounds, we prove that limited concept shift
and significant covariate shift favor the evaluation capabil-
ity for generalization.
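The paper's precise metrics and bounds are defined in its Equations (1) and (2) and are not reproduced here. As a hedged, generic illustration of quantifying covariate shift between two domains, the sketch below computes a maximum mean discrepancy (MMD) with an RBF kernel on extracted features, a standard proxy rather than the paper's M_cov.

```python
import numpy as np

def rbf_mmd(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Squared MMD between feature sets x (n, d) and y (m, d) with an RBF kernel."""
    def kernel(a, b):
        sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    k_xx, k_yy, k_xy = kernel(x, x), kernel(y, y), kernel(x, y)
    return float(k_xx.mean() + k_yy.mean() - 2.0 * k_xy.mean())

# Toy usage: features of the same category drawn from two different domains.
rng = np.random.default_rng(0)
domain_a = rng.normal(0.0, 1.0, size=(200, 16))
domain_b = rng.normal(0.5, 1.0, size=(200, 16))   # shifted marginal = covariate shift
print(rbf_mmd(domain_a, domain_b))
```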
Moreover, a critical yet common problem in DG is
model selection and the potential unfairness in the compar-
ison caused by leveraging knowledge of the target data to
choose hyperparameters that favor test performance [3, 27].
This issue is exacerbated by the notable variance of test per-
formance with various algorithm-irrelevant hyperparame-
ters on current DG datasets. Intuitively, strong and unsta-
ble concept shift, such as confusing mapping relations from
images to labels across domains, hampers training con-
vergence and enlarges the variance.
We conduct extensive experiments on three levels. First,
we evaluate NICO++ and current DG datasets with the pro-
posed metrics and show the superiority of NICO++ in eval-
uation capability, as shown in Figure 1. Second, we con-
duct copious experiments on NICO++ to benchmark cur-
rent representative methods with the proposed protocols.
Results show that there is substantial room for improvement
for generalization methods on NICO++. Third, we show that
NICO++ helps alleviate the issue by shrinking the possible
gain from leaking oracle knowledge and contributes as a
fairer benchmark for the evaluation of DG methods, which
meets the proposed metrics. |
Yadav_Habitat-Matterport_3D_Semantics_Dataset_CVPR_2023 | Abstract
We present the Habitat-Matterport 3D Semantics
(HM3DS EM) dataset. HM3DS EMis the largest dataset
of 3D real-world spaces with densely annotated seman-
tics that is currently available to the academic commu-
nity. It consists of 142,646 object instance annotations
across 216 3D spaces and 3,100 rooms within those spaces.
The scale, quality, and diversity of object annotations far
exceed those of prior datasets. A key difference setting
apart HM3DS EMfrom other datasets is the use of tex-
ture information to annotate pixel-accurate object bound-
aries. We demonstrate the effectiveness of HM3DS EM
dataset for the Object Goal Navigation task using differ-
ent methods. Policies trained using HM3DS EMperform
outperform those trained on prior datasets. Introduction
ofHM3DS EMin the Habitat ObjectNav Challenge lead
to an increase in participation from 400 submissions in
2021 to 1022 submissions in 2022. Project page: https:
//aihabitat.org/datasets/hm3d-semantics/
| 1. Introduction
Over the recent past, work on acquiring and semantically
annotating datasets of real-world spaces has significantly
accelerated research into embodied AI agents that can per-
ceive, navigate and interact with realistic indoor scenes [ 1–5].
However, the acquisition of such datasets at scale is a labori-
ous process. HM3D [ 5] which is one of the largest available
datasets with 1000 high-quality and complete indoor space
reconstructions, reportedly required 800+ hours of human
effort to carry out mainly data curation and verification of
3D reconstructions. Moreover, dense semantic annotation of
such acquired spaces remains incredibly challenging.
We present the Habitat-Matterport 3D Dataset Seman-
tics ( HM3DS EM). This dataset provides a dense semantic
annotation ‘layer’ augmenting the spaces from the original
*Equal Contribution, Correspondence: [email protected]
†Equal ContributionHM3D dataset. This semantic ‘layer’ is implemented as a
set of textures that encode object instance semantics and
cluster objects into distinct rooms. The semantics include
architectural elements (walls, floors, ceilings), large objects
(furniture, appliances etc.), as well as ‘stuff’ categories (ag-
gregations of smaller items such as books on bookcases).
This semantic instance information is specified in the seman-
tic texture layer, providing pixel-accurate correspondences
to the original acquired RGB surface texture and underlying
geometry of the objects.
TheHM3DS EMdataset currently contains annotations
for142,646object instances distributed across 216spaces
and3,100rooms within those spaces. Figure 1 shows some
examples of the semantic annotations from the HM3DS EM
dataset. The achieved scale is larger than prior work (2.8x rel-
ative to Matterport3D [ 6] (MP3D) and 2.1x relative to ARK-
itScenes [ 7] in terms of total number of object instances).
We demonstrate the usefulness of HM3DS EMon the Ob-
jectGoal navigation task. Training on HM3DS EMresults
in higher cross-dataset generalization performance. Surpris-
ingly, the policies trained on HM3DS EMperform better on
average across scene datasets compared to training on the
datasets themselves. We also show that increasing the size of
training datasets improve the navigation performance. These
results highlight the importance of improving the quality
and scale of 3D datasets with dense semantic annotations for
improving downstream embodied AI task performance.
|
Xu_Visual-Tactile_Sensing_for_In-Hand_Object_Reconstruction_CVPR_2023 | Abstract
Tactile sensing is one of the modalities humans rely on
heavily to perceive the world. Working with vision, this
modality refines local geometry structure, measures defor-
mation at the contact area, and indicates the hand-object
contact state. With the availability of open-source tactile
sensors such as DIGIT, research on visual-tactile learning
is becoming more accessible and reproducible. Leverag-
ing this tactile sensor, we propose a novel visual-tactile
in-hand object reconstruction framework VTacO , and ex-
tend it to VTacOH for hand-object reconstruction. Since
our method can support both rigid and deformable ob-
ject reconstruction, no existing benchmarks are proper for
the goal. We propose a simulation environment, VT-Sim,
which supports generating hand-object interaction for both
rigid and deformable objects. With VT-Sim, we gener-
ate a large-scale training dataset and evaluate our method
on it. Extensive experiments demonstrate that our pro-
posed method can outperform the previous baseline meth-
ods qualitatively and quantitatively. Finally, we directly ap-
ply our model trained in simulation to various real-worldtest cases, which display qualitative results. Codes, mod-
els, simulation environment, and datasets are available at
https://sites.google.com/view/vtaco/ .
| 1. Introduction
Human beings have a sense of object geometry by seeing
and touching, especially when the object is in manipulation
and undergoes a large portion of occlusion, where visual
information is not enough for the details of object geom-
etry. In such cases, vision-based tactile sensing is a good
supplement as a way of proximal perception. In the past,
few vision-based tactile sensors were commercially avail-
able or open-source, so the visual-tactile sensing techniques
could not be widely studied. Previous works [27, 34] on
in-hand object reconstruction either studied rigid objects or
were limited to simple objects with simple deformation.
* indicates equal contributions.
§ Cewu Lu is the corresponding author, the member of Qing Yuan
Research Institute and MoE Key Lab of Artificial Intelligence, AI Institute,
Shanghai Jiao Tong University, China and Shanghai Qi Zhi institute.
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
8803
Vision-based tactile sensors [6, 15, 30, 33] can produce
colorful tactile images indicating local geometry and defor-
mation in the contact areas. In this work, we mainly work
with DIGIT [15] as it is open-source for manufacture and is
easier to reproduce sensing modality. With tactile images,
we propose a novel Visual- Tactile in-hand Object recon-
struction framework named VTacO . VTacO reconstructs
the object geometry with the input of a partial point cloud
observation and several tactile images. The tactile and ob-
ject features are extracted by neural networks and fused
in the Winding Number Field (WNF) [13], and the object
shape is extracted by Marching Cubes algorithm [17]. WNF
can represent the object shape with open and thin structures.
The poses of tactile sensors can be determined either by
markers attached or by hand kinematics. By default, VTacO
assumes the tactile sensor poses can be obtained indepen-
dently, but we also discuss how to obtain the tactile sensor
poses alongside the object with hand pose estimation. The
corresponding method is named VTacOH .
With tactile information, we can enhance pure visual in-
formation from three aspects: (1) Local geometry refine-
ment . We use tactile sensing as proximal perception to
complement details of local geometry. (2) Deformation at
contact area . Objects, even those we consider rigid, can
undergo considerable deformation given external forces ex-
erted by hand. (3) Hand-object contact state . Tactile sen-
sors indicate whether the hand is in contact with the object’s
surface. To demonstrate such merits, we conduct the object
reconstruction tasks in both rigid and non-rigid settings.
Since obtaining the ground truth of object deformation
in the real world is hard, we first synthesize the training
data from a simulator. DIGIT has an official simulation im-
plementation, TACTO [29]. However, it is based on py-
bullet [5], which has limited ability to simulate deformable
objects. Thus, we implement a tactile simulation environ-
ment VT-Sim in Unity. In VT-Sim, we generate hand poses
with GraspIt! [18], and simulate the deformation around
the contact area with an XPBD-based method. In the sim-
ulation, we can easily obtain depth image, tactile image,
DIGIT pose, and object WNF as training samples for both
rigid and non-rigid objects.
To evaluate the method, we compare the proposed
visual-tactile models with its visual-only setting, and the
previous baseline 3D Shape Reconstruction from Vision
and Touch (3DVT) [23]. Extensive experiments show that
our method can achieve both quantitative and qualitative
improvements on baseline methods. Besides, since we
make the tactile features fused with winding number predic-
tion, we can procedurally gain finer geometry reconstruc-
tion results by incrementally contacting different areas of
objects. It can be useful for robotics applications [8, 22].
Then, we directly apply the model trained with synthesis
data to the real world. It shows great generalization ability.We summarize our contributions as follows:
• A visual-tactile learning framework to reconstruct an
object when it is being manipulated. We provide the
object-only version VTacO, and the hand-object ver-
sion VTacOH.
• A simulation environment, VT-Sim, which can gener-
ate training samples. We also validate the generaliza-
tion ability of the models trained on the simulated data
to the real-world data.
|
Yuan_Robust_Test-Time_Adaptation_in_Dynamic_Scenarios_CVPR_2023 | Abstract
Test-time adaptation (TTA) intends to adapt the pre-
trained model to test distributions with only unlabeled test
data streams. Most of the previous TTA methods have
achieved great success on simple test data streams such as
independently sampled data from single or multiple distri-
butions. However, these attempts may fail in dynamic sce-
narios of real-world applications like autonomous driving,
where the environments gradually change and the test data
is sampled correlatively over time. In this work, we ex-
plore such practical test data streams to deploy the model
on the fly, namely practical test-time adaptation (PTTA).
To do so, we elaborate a Robust Test-Time Adaptation
(RoTTA) method against the complex data stream in PTTA.
More specifically, we present a robust batch normalization
scheme to estimate the normalization statistics. Meanwhile,
a memory bank is utilized to sample category-balanced data
with consideration of timeliness and uncertainty. Further, to
stabilize the training procedure, we develop a time-aware
reweighting strategy with a teacher-student model. Exten-
sive experiments prove that RoTTA enables continual test-
time adaptation on the correlatively sampled data streams.
Our method is easy to implement, making it a good choice
for rapid deployment. The code is publicly available at
https://github.com/BIT-DA/RoTTA
| 1. Introduction
In recent years, many machine learning problems have
made considerable headway with the success of deep neu-
ral networks [13, 22, 33, 38]. Unfortunately, the perfor-
mance of deep models drops significantly when training
data and testing data come from different distributions [59],
which limits their utility in real-world applications. To re-
duce the distribution shift, a handful of works focus on
transfer learning field [56], in particular, domain adapta-
tion (DA) [17, 42, 45, 48, 69, 72] or domain generalization
(DG) [40,41,52,71,83], in which one or more different but
Corresponding author
Test data streamContinual TTANon-i.i.d.TTAPractical TTACategoryDistributionFully TTA
Correlation samplingDistributionchangingFigure 1. We consider the practical test-time adaptation (TTA)
setup and compare it with related ones. First, Fully TTA [70]
adapts models on a fixed test distribution with an independently
sampled test stream. Then, on this basis, Continual TTA [73] takes
the continually changing distributions into consideration. Next,
Non-i.i.d. TTA [19] tries to tackle the correlatively sampled test
streams on a single test distribution, where the label distribution
among a batch of data deviates from that of the test distribution.
To be more practical, Practical TTA strives to connect both worlds:
distribution changing and correlation sampling.
related labeled datasets (a.k.a. source domain) are collected
to help the model generalize well to unlabeled or unseen
samples in new datasets (a.k.a. target domain).
While both DA and DG have extensively studied the
problem of distribution shifts, they typically assume acces-
sibility to the raw source data. However, in many practical
scenarios like personal consumption records, the raw data
should not be publicly available due to data protection reg-
ulations. Further, existing methods have to perform heavy
backward computation, resulting in unbearable training
costs. Test-time adaptation (TTA) [3,11,16,24,26,54,65,81]
attempts to address the distribution shift online at test time
with only unlabeled test data streams. Unequivocally, TTA
has drawn widespread attention in a variety of applications,
e.g., 2D/3D visual recognition [2, 29, 49, 65, 82], multi-
modality [63, 64] and document understanding [15].
Prior TTA studies [7, 20, 70, 73] mostly concentrate on
a simple adaptation scenario, where test samples are inde-
pendently sampled from a fixed target domain. To name a
few, Sun et al. [65] adapt to online test samples drawn from
a constant or smoothly changing distribution with an auxil-
iary self-supervised task. Wang et al. [70] adapt to a fixed
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
15922
Table 1. Comparison between our proposed practical test-time adaptation (PTTA) and related adaptation settings.
SettingAdaptation Stage Available Data Test Data Stream
Train Test Source Target Distribution Sampling Protocol
Domain Adaptation ! % ! ! - -
Domain Generalization ! % ! % - -
Test-Time Training [65] ! ! ! ! stationary independently
Fully Test-Time Adaptation [70] % ! % ! stationary independently
Continual Test-Time Adaptation [73] % ! % ! continually changing independently
Non-i.i.d. Test-Time Adaptation [5, 19] % ! % ! stationary correlatively
Practical Test-Time Adaptation (Ours) % ! % ! continually changing correlatively
target distribution by performing entropy minimization on-
line. However, such an assumption is violated when the test
environments change frequently [73]. Later on, Boudiaf et
al. [5] and Gong et al. [19] consider the temporal correlation
ship within test samples. For example, in autonomous driv-
ing, test samples are highly correlated over time as the car
will follow more vehicles on the highway or will encounter
more pedestrians in the streets. More realistically, the data
distribution changes as the surrounding environment alerts
in weather, location, or other factors. In a word, distribution
change and data correlation occur simultaneously in reality.
Confronting continually changing distributions, tradi-
tional algorithms like pseudo labeling or entropy minimiza-
tion become more unreliable as the error gradients cumu-
late. Moreover, the high correlation among test samples re-
sults in the erroneous estimation of statistics for batch nor-
malization and collapse of the model. Driven by this analy-
sis, adapting to such data streams will encounter two major
obstacles: 1) incorrect estimation in the batch normaliza-
tion statistics leads to erroneous predictions of test samples,
consequently resulting in invalid adaptation; 2) the model
will easily or quickly overfit to the distribution caused by
the correlative sampling. Thus, such dynamic scenarios are
pressing for a new TTA paradigm to realize robust adapta-
tion.
In this work, we launch a more realistic TTA setting,
where distribution changing and correlative sampling oc-
cur simultaneously at the test phase. We call this Practical
Test-Time Adaptation , or briefly, PTTA . To understand more
clearly the similarities and differences between PTTA and
the previous setups, we visualize them in Figure 1 and sum-
marize them in Table 1. To conquer this challenging prob-
lem, we propose a Robust Test-TimeAdaptation ( RoTTA )
method, which consists of three parts: 1) robust statistics es-
timation, 2) category-balanced sampling considering time-
liness and uncertainty and 3) time-aware robust training.
More concretely, we first replace the erroneous statistics of
the current batch with global ones maintained by the expo-
nential moving average. It is a more stable manner to esti-
mate the statistics in BatchNorm layers. Then, we simulate
a batch of independent-like data in memory with category-
balanced sampling while considering the timeliness and un-
certainty of the buffered samples. That is, samples that arenewer and less uncertain are kept in memory with higher
priority. With this batch of category-balanced, timely and
confident samples, we can obtain a snapshot of the current
distribution. Finally, we introduce a time-aware reweight-
ing strategy that considers the timeliness of the samples in
the memory bank, with a teacher-student model to perform
robust adaptation. With extensive experiments, we demon-
strate that RoTTA can robustly adapt in the practical setup,
i.e., PTTA.
In a nutshell, our contributions can be summarized as:
• We propose a new test-time adaptation setup that
is more suitable for real-world applications, namely
practical test-time adaptation (PTTA). PTTA considers
both distribution changing and correlation sampling.
• We benchmark the performance of prior methods in
PTTA and uncover that they only consider one aspect
of the problem, resulting in ineffective adaptation.
• We propose a robust test-time adaptation method
(RoTTA), which has a more comprehensive considera-
tion of PTTA challenges. Ease of implementation and
effectiveness make it a practical deployment option.
• We extensively demonstrate the practicality of PTTA
and the effectiveness of RoTTA on common TTA
benchmarks [23], i.e., CIFAR-10-C and CIFAR-100-
C and a large-scale DomainNet [58] dataset. RoTTA
obtains state-of-the-art results, outperforming the best
baseline by a large margin (reducing the averaged
classification error by over 5.9%, 5.5% and 2.2% on
CIFAR-10-C, CIFAR-100-C and DomainNet, respec-
tively).
|
Zhang_Frame_Flexible_Network_CVPR_2023 | Abstract
Existing video recognition algorithms always conduct
different training pipelines for inputs with different frame
numbers, which requires repetitive training operations and
multiplying storage costs. If we evaluate the model us-
ing other frames which are not used in training, we ob-
serve the performance will drop significantly (see Fig. 1),
which is summarized as Temporal Frequency Deviation
phenomenon. To fix this issue, we propose a general frame-
work, named Frame Flexible Network (FFN), which not
only enables the model to be evaluated at different frames to
adjust its computation, but also reduces the memory costs of
storing multiple models significantly. Concretely, FFN in-
tegrates several sets of training sequences, involves Multi-
Frequency Alignment (MFAL) to learn temporal frequency
invariant representations, and leverages Multi-Frequency
Adaptation (MFAD) to further strengthen the representa-
tion abilities. Comprehensive empirical validations us-
ing various architectures and popular benchmarks solidly
demonstrate the effectiveness and generalization of FFN
(e.g., 7.08/5.15/2.17 %performance gain at Frame 4/8/16
on Something-Something V1 dataset over Uniformer). Code
is available at https://github.com/BeSpontaneous/FFN.
| 1. Introduction
The growing number of online videos boosts the research
on video recognition, laying a solid foundation for deep
learning which requires massive data. Compared with im-
age classification, video recognition methods need a series
of frames to represent the video which scales the compu-
tation. Thus, the efficiency of video recognition methods
has always been an essential factor in evaluating these ap-
proaches. One existing direction to explore efficiency is
designing lightweight networks [9, 40] which are hardware
friendly. Even if they increase the efficiency with an accept-
able performance trade-off, these methods cannot make fur-
ther customized adjustments to meet the dynamic-changing
resource constraint in real scenarios. In community, there
(a) Temporal Frequency Deviation phenomenon exists in various
video recognition architectures (TSM, SlowFast, Uniformer and their
ST counterparts; x-axis: GFLOPs, y-axis: Acc (%)).
(b) Temporal Frequency Deviation phenomenon exists in different
depths of deep networks (TSM with R18/R50/R101 backbones;
x-axis: GFLOPs, y-axis: Acc (%)).
Figure 1. Temporal Frequency Deviation phenomenon widely
exists in video recognition. All methods are trained with high
frame number and evaluated at other frames to compare with Sep-
arated Training (ST) which individually trains the model at differ-
ent frames on Something-Something V1 dataset.
are two lines of research being proposed to resolve this is-
sue. The first one is to design networks that can execute at
various depths [10] or widths [37] to adjust the computa-
tion from the model perspective. The other line of research
considers modifying the resolutions of input data [15,34] to
accommodate the cost from the data aspect. However, these
methods are carefully designed for 2D CNNs, which may
hinder their applications on video recognition where 3D
CNNs and Transformer methods are crucial components.
Different from image-related tasks, we need to sample
multiple frames to represent the video, and the computa-
tional costs will grow proportionally to the number of sam-
pled frames. Concretely, standard protocol trains the same
network with different frames separately to obtain multi-
ple models with different performances and computations.
This brings challenges to applying these networks on edge
devices as the parameters will be multiplied if we store all
models, and downloading and offloading models to switch
them will cost non-negligible time. Moreover, the same
video may be sampled at various temporal rates on differ-
ent platforms, employing a single network that is trained at
a certain frame number for inference cannot resist the vari-
ance of frame numbers in real scenarios.
Training the model with a high frame number (i.e.,
high temporal frequency) and directly evaluating it at fewer
frames (i.e., low temporal frequency) to adjust the cost is
a naive and straightforward solution. To test its effective-
ness, we compare it with Separated Training (ST) which
trains the model at different temporal frequency individu-
ally and tests it with the corresponding frame. We conduct
experiments on 2D-network TSM [18], 3D-network Slow-
Fast [6] and Transformer-network Uniformer [16], and find
obvious performance gaps between the inference results and
ST from Fig. 1, which means these methods will exhibit sig-
nificantly inferior performance if they are not evaluated at
the frame number used in training. Further, we conduct the
same experiments on different depths of deep networks and
a similar phenomenon appears. We denote this generally
existing phenomenon as Temporal Frequency Deviation.
The potential reason for Temporal Frequency Devia-
tion has been explored in Sec. 3 and briefly summarized
as the shift in normalization statistics. To address this is-
sue, we propose a general framework, named Frame Flexi-
ble Network (FFN), which only requires one-time training,
but can be evaluated at multiple frame numbers with great
flexibility. We import several input sequences with differ-
ent sampled frames to FFN during training and propose
Multi-Frequency Alignment (MFAL) to learn the temporal
frequency invariant representations for robustness towards
frame change. Moreover, we present Multi-Frequency
Adaptation (MFAD) to further strengthen the representation
abilities of the sub-networks which helps FFN to exhibit
strong performance at different frames during inference.
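As a hedged illustration of training with several temporal frequencies at once, the sketch below feeds differently sampled frame sequences of the same videos through one shared backbone and adds a simple KL-based alignment term between their predictions; the particular alignment loss, frame counts, and names are assumptions for illustration and not FFN's exact MFAL/MFAD design.

```python
import torch
import torch.nn.functional as F

def multi_frequency_step(backbone, clips, label):
    """clips: dict mapping frame count -> tensor (B, T, C, H, W) sampled from the same videos."""
    logits = {t: backbone(x) for t, x in clips.items()}   # shared weights for every frame count
    # Supervised loss at every temporal frequency.
    loss = sum(F.cross_entropy(l, label) for l in logits.values())

    # Align low-frame predictions with the highest-frame predictions,
    # encouraging temporal-frequency-invariant representations.
    t_max = max(logits)
    target = logits[t_max].detach().softmax(dim=-1)
    for t, l in logits.items():
        if t != t_max:
            loss = loss + F.kl_div(l.log_softmax(dim=-1), target, reduction="batchmean")
    return loss
```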
Although normalization shifting problem [36, 37] and
resolution-adaptive networks [15, 34] have been studied,
we stress that designing frame flexible video recognition
frameworks to accommodate the costs and save parameters
is non-trivial and has practical significance for the follow-
ing reasons. First, prior works [15, 34] carefully analyzed
the detailed structure of 2D convolutions in order to pri-
vatize the weights for different scale images. While our
method does not touch the specific design of the spatial-
temporal modeling components and shares their weights for
inputs with different frames. This procedure not only en-
ables our method to be easily applied to various architec-
tures (2D/3D/Transformer models), but also enforces FFN
to learn temporal frequency invariant representations. Sec-
ond, it is, indeed, a common practice to conduct Separated
Training (ST) in video recognition, which needs multiply-
ing memory costs to store individual models, and the mod-
els are hard to resist the variance in temporal frequency
which limits their applications in actual practice. While
FFN provides a feasible solution to these challenges which
significantly reduces the memory costs of storing multiple
models and can be evaluated at different frames to adjust
the cost with even higher accuracy compared to ST.With the proposed framework, we can resolve Tempo-
ral Frequency Deviation and enable these methods to adjust
their computation based on the current resource budget by
sampling different frames, trimming the storage costs of ST
remarkably. Moreover, we provide a naive solution that en-
ables FFN to be evaluated at any frame and increases its
flexibility during inference. Validation results prove that
FFN outperforms ST even at frames that are not used in
training. The contributions are summarized as follows:
• We reveal the phenomenon of Temporal Frequency
Deviation that widely exists in video recognition. It is
detailedly analyzed and practically inspires our study.
• We propose a general framework Frame Flexible Net-
work (FFN) to resolve Temporal Frequency Devia-
tion. We design Multi-Frequency Alignment (MFAL)
to learn temporal frequency invariant representations
and present Multi-Frequency Adaptation (MFAD) to
further strengthen the representation abilities.
• Comprehensive empirical validations show that FFN,
which only requires one-shot training, can adjust its
computation by sampling different frames and outper-
form Separated Training (ST) at different frames on
various architectures and datasets, reducing the mem-
ory costs of storing multiple models significantly.
|
Yang_Revisiting_Weak-to-Strong_Consistency_in_Semi-Supervised_Semantic_Segmentation_CVPR_2023 | Abstract
In this work, we revisit the weak-to-strong consistency
framework, popularized by FixMatch from semi-supervised
classification, where the prediction of a weakly perturbed im-
age serves as supervision for its strongly perturbed version.
Intriguingly, we observe that such a simple pipeline already
achieves competitive results against recent advanced works,
when transferred to our segmentation scenario. Its success
heavily relies on the manual design of strong data augmen-
tations, however, which may be limited and inadequate to
explore a broader perturbation space. Motivated by this,
we propose an auxiliary feature perturbation stream as a
supplement, leading to an expanded perturbation space. On
the other, to sufficiently probe original image-level augmen-
tations, we present a dual-stream perturbation technique,
enabling two strong views to be simultaneously guided by
a common weak view. Consequently, our overall Unified
Dual-Stream Perturbations approach (UniMatch) surpasses
all existing methods significantly across all evaluation proto-
cols on the Pascal, Cityscapes, and COCO benchmarks. Its
superiority is also demonstrated in remote sensing interpre-
tation and medical image analysis. We hope our reproduced
FixMatch and our results can inspire more future works.
| 1. Introduction
Semantic segmentation aims to provide pixel-level pre-
dictions to images, which can be deemed as a dense classi-
fication task and is fundamental to real-world applications,
e.g., autonomous driving. Nevertheless, conventional fully-
supervised scenario [43, 73, 77] is extremely hungry for deli-
cately labeled images by human annotators, greatly hinder-
ing its broad application to some fields where it is costly
and even infeasible to annotate abundant images. Therefore,
semi-supervised semantic segmentation [56] has been pro-
posed and is attracting increasing attention. Generally, it
wishes to alleviate the labor-intensive process via leverag-
Figure 1. Comparison between state-of-the-art methods (PseudoSeg [ICLR'2021],
CPS [CVPR'2021], PC2Seg [ICCV'2021], ReCo [ICLR'2022], ST++ [CVPR'2022],
U2PL [CVPR'2022]) and our reproduced FixMatch [55] on the Pascal dataset;
x-axis: number of labeled images (10582 in total), y-axis: mIOU (%).
ing a large quantity of unlabeled images, accompanied by a
handful of manually labeled images.
Following closely the research line of semi-supervised
learning (SSL), advanced methods in semi-supervised se-
mantic segmentation have evolved from GANs-based adver-
sarial training paradigm [21, 47, 56] into the widely adopted
consistency regularization framework [13, 19, 28, 29, 49, 61,
81] and reborn self-training pipeline [23, 27, 68, 70]. In this
work, we focus on the weak-to-strong consistency regulariza-
tion framework, which is popularized by FixMatch [55] from
the field of semi-supervised classification, and then impacts
many other relevant tasks [42, 45, 57, 62, 66, 67]. The weak-
to-strong approach supervises a strongly perturbed unlabeled
image x_s with the prediction yielded from its correspond-
ing weakly perturbed version x_w, as illustrated in Figure 2a.
Intuitively, its success lies in that the model is more likely
to produce high-quality predictions on x_w, while x_s is more
effective for our model to learn, since the strong perturba-
tions introduce additional information as well as mitigate
confirmation bias [2]. We surprisingly notice that, so long
as coupled with appropriate strong perturbations, FixMatch
can indeed still exhibit powerful generalization capability in
our scenario, obtaining superior results over state-of-the-art
(SOTA) methods, as compared in Figure 1. Thus, we select
this simple yet effective framework as our baseline.
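For concreteness, a hedged sketch of this weak-to-strong consistency loss is given below: per-pixel pseudo-labels from the weak view supervise the strong view above a confidence threshold. The threshold value and tensor layout are illustrative, and this is a generic FixMatch-style loss rather than the exact code of [55].

```python
import torch
import torch.nn.functional as F

def weak_to_strong_loss(model, x_weak, x_strong, threshold: float = 0.95):
    """FixMatch-style unsupervised loss for segmentation.

    x_weak, x_strong: two views of the same unlabeled images, (B, 3, H, W).
    model(x) returns per-pixel class logits of shape (B, num_classes, H, W).
    """
    with torch.no_grad():
        probs = model(x_weak).softmax(dim=1)          # predictions on the weak view
        conf, pseudo = probs.max(dim=1)               # (B, H, W) confidence and hard label
    logits_s = model(x_strong)                        # predictions on the strong view
    loss = F.cross_entropy(logits_s, pseudo, reduction="none")   # (B, H, W)
    mask = (conf >= threshold).float()                # keep only confident pixels
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)
```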
Through investigation of image-level strong perturbations,
Method (# labeled images, 10582 in total): 92 / 183 / 366 / 732 / 1464
w/o any SP: 39.5 / 52.7 / 65.5 / 69.2 / 74.6
w/ CutMix: 56.7 / 67.9 / 71.9 / 75.1 / 78.3
w/ whole SP: 63.9 / 73.0 / 75.5 / 77.8 / 79.2
Table 1. The importance of image-level strong perturbations (SP)
to FixMatch on the Pascal dataset. w/o any SP: directly utilize the hard
label of x_w to supervise its logits. w/ CutMix: only use CutMix [71]
as a perturbation. w/ whole SP: strong perturbations contain color
transformations from ST++ [68], together with CutMix.
we observe that they play an indispensable role in making
the FixMatch a rather strong competitor in semi-supervised
semantic segmentation. As demonstrated in Table 1, the
performance gap between whether to adopt perturbations is
extremely huge. Greatly inspired by these clues, we hope to
inherit the spirit of strong perturbations from FixMatch, but
also further strengthen them from two different perspectives
and directions, namely expanding a broader perturbation
space, and sufficiently harvesting original perturbations.
Each of these two perspectives is detailed in the following
two paragraphs respectively.
Image-level perturbations, e.g., color jitter and CutMix
[71], include heuristic biases, which actually introduce ad-
ditional prior information into the bootstrapping paradigm
of FixMatch, so as to capture the merits of consistency reg-
ularization. When it is not equipped with these perturbations,
FixMatch degenerates to a naïve online self-training
pipeline, producing much worse results. Despite their effective-
ness, these perturbations are totally constrained at the image
level, hindering the model from exploring a broader perturbation
space and from maintaining consistency at diverse levels. To this
end, in order to expand original perturbation space, we de-
sign a unified perturbation framework for both raw images
and extracted features. Concretely, on raw images, similar to
FixMatch, pre-defined image-level strong perturbations are
applied, while for extracted features of weakly perturbed im-
ages, an embarrassingly simple channel dropout is inserted.
In this way, our model pursues the equivalence of predictions
on unlabeled images at both the image and embedding level.
These two perturbation levels can be complementary to each
other. Distinguished from [33, 41], we separate different
levels of perturbations into independent streams to avoid a
single stream being excessively hard to learn.
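A hedged sketch of the feature-perturbation stream follows: channel dropout is applied to the encoder features of the weak view, and the resulting prediction is supervised by the same weak-view pseudo-label. nn.Dropout2d is used here because it zeroes whole channels, but the placement and drop rate are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FeaturePerturbedSegmenter(nn.Module):
    """Encoder-decoder segmenter with an optional channel-dropout stream."""

    def __init__(self, encoder: nn.Module, decoder: nn.Module, drop: float = 0.5):
        super().__init__()
        self.encoder, self.decoder = encoder, decoder
        self.channel_dropout = nn.Dropout2d(p=drop)   # zeroes entire feature channels

    def forward(self, x_weak, perturb_features: bool = False):
        feat = self.encoder(x_weak)                   # (B, C, h, w) features of the weak view
        if perturb_features:
            feat = self.channel_dropout(feat)         # feature-level perturbation stream
        return self.decoder(feat)                     # per-pixel logits

# The clean weak-view prediction (perturb_features=False) provides the pseudo-label;
# the perturbed prediction (perturb_features=True) is trained to match it, in parallel
# with the two image-level strong views.
```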
On the other hand, current FixMatch framework merely
utilizes a single strong view of each unlabeled image in a
mini-batch, which is insufficient to fully exploit the manually
pre-defined perturbation space. Considering this, we present
a simple yet highly effective improvement to the input, where
dual independent strong views are randomly sampled from
the perturbation pool. They are then fed into the student
model in parallel, and simultaneously supervised by theirshared weak view. Such a minor modification even easily
turns the FixMatch baseline into a stronger SOTA framework
by itself. Intuitively, we conjecture that enforcing two strong
views to be close to a common weak view can be regarded as
minimizing the distance between these strong views. Hence,
it shares the spirits and merits of contrastive learning [11,25],
which can learn more discriminative representations and is
proved to be particularly beneficial to our current task [40,
61]. We conduct comprehensive studies on the effectiveness
of each proposed component. Our contributions can be
summarized in four folds:
•We notice that, coupled with appropriate image-level
strong perturbations, FixMatch is still a powerful frame-
work when transferred to the semantic segmentation
scenario. A plainly reproduced FixMatch outperforms
almost all existing methods in our current task.
•Built upon FixMatch, we propose a unified perturba-
tion framework that unifies image-level and feature-
level perturbations in independent streams, to exploit a
broader perturbation space.
•We design a dual-stream perturbation strategy to fully
probe pre-defined image-level perturbation space, as
well as to harvest the merits of contrastive learning for
discriminative representations.
•Our framework that integrates above two components,
surpasses existing methods remarkably across all evalu-
ation protocols on the Pascal, Cityscapes, and COCO.
Notably, it also exhibits strong superiority in medical
image analysis and remote sensing interpretation.
|
Zhang_Improving_Graph_Representation_for_Point_Cloud_Segmentation_via_Attentive_Filtering_CVPR_2023 | Abstract
Recently, self-attention networks achieve impressive per-
formance in point cloud segmentation due to their superior-
ity in modeling long-range dependencies. However, com-
pared to self-attention mechanism, we find graph convolu-
tions show a stronger ability in capturing local geometry
information with less computational cost. In this paper, we
employ a hybrid architecture design to construct our Graph
Convolution Network with Attentive Filtering ( AF-GCN ),
which takes advantage of both graph convolution and self-
attention mechanism. We adopt graph convolutions to ag-
gregate local features in the shallow encoder stages, while
in the deeper stages, we propose a self-attention-like mod-
ule named Graph Attentive Filter (GAF) to better model
long-range contexts from distant neighbors. Besides, to fur-
ther improve graph representation for point cloud segmen-
tation, we employ a Spatial Feature Projection (SFP) mod-
ule for graph convolutions which helps to handle spatial
variations of unstructured point clouds. Finally, a graph-
shared down-sampling and up-sampling strategy is intro-
duced to make full use of the graph structures in point cloud
processing. We conduct extensive experiments on multi-
ple datasets including S3DIS, ScanNetV2, Toronto-3D, and
ShapeNetPart. Experimental results show our AF-GCN ob-
tains competitive performance.
| 1. Introduction
With the rapid development of 3D sensing technolo-
gies (such as LiDARs and RGB-D cameras), 3D point
clouds have demonstrated great potential in many applica-
tions such as robotics, autonomous driving, virtual reality
and augmented reality [10]. Consequently, point cloud seg-
mentation has attracted more and more attention. Unlike
regular pixel grids in 2D images, 3D points in point clouds
are irregular and unstructured, thereby posing significant
challenges for point cloud segmentation.
Figure 1. (a) and (b) are the input point cloud and the corresponding semantic labels, respectively; (c) and (d) visualize the low-level and high-level point features after the first and last down-sampling, respectively. Differences in color indicate differences in features. As shown in (c), neighbors within the same object may have low feature correlations due to differences in RGB attributes or geometric structure. As shown in (d), points after several rounds of down-sampling are sparse, and distant neighbors should be filtered during high-level feature aggregation because they may carry irrelevant information.
Several studies [18, 21, 46] adopt graph convolution networks to utilize the topological structure of point clouds for segmentation. Graph convolution networks learn fea-
tures from points and their neighbors for better captur-
ing local geometric features while maintaining permuta-
tion invariance, which have intrinsic advantages for han-
dling non-Euclidean point cloud data. Furthermore, many
works [17, 44, 51, 59] improve the graph convolution net-
works by proposing well-designed convolution kernels and
get promising performance in point cloud segmentation.
Recently, inspired by the great success of vision trans-
formers [8,11,23,34,56], several works [9,15,36,52,55,58]
introduce self-attention mechanism into point cloud anal-
ysis for its superiority in modeling long-range dependen-
cies and high-level relations, which obtain significant per-
formance improvement, especially in point cloud segmen-
tation. However, self-attention mechanism exhibits certain
limitations in capturing local geometry information. Com-
pared with graph convolutions, self-attention mechanisms
require additional computation for feature correlations, and
assign large weights to neighbors which have high feature
correlations. As illustrated in Figure 1, points in low-level
feature learning phases are dense and low-level features are
mainly extracted from the colors and geometry structures
(like edges, corners and surfaces). Therefore, self-attention
mechanisms are inefficient in low-level feature aggregation
and may neglect information about neighbors which have
considerable differences in colors or geometry structures.
To exploit the advantages of graph convolution in cap-
turing local geometry information and self-attention mech-
anism in modeling long-range contexts simultaneously, we
design a hybrid network, namely Graph Convolution Net-
work with Attentive Filtering (AF-GCN). In the shallow
stages of the encoder, we adopt graph convolutions to ag-
gregate local geometry information. While in the deeper
stages, we propose a self-attention-like module in the graph
convolution form called Graph Attentive Filter (GAF) to
improve graph representation for point cloud segmentation.
Different from previous studies [9, 15, 36, 50, 58], our pro-
posed Graph Attentive Filter estimates the correlation be-
tween points from both feature and spatial structure in-
formation, then suppresses irrelevant information from the
distant neighbors to better capture high-level relations.
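A minimal sketch of such an attentive filter is given below: each k-nearest neighbor is scored from the concatenation of its feature and spatial differences with the center point, and the resulting weights down-weight irrelevant distant neighbors. The class, layer sizes, and tensor shapes are illustrative assumptions rather than the exact AF-GCN implementation.

```python
# Sketch of a Graph-Attentive-Filter-style aggregation over k nearest neighbors.
# feats: (N, C) point features, xyz: (N, 3) coordinates, knn_idx: (N, K) indices.
import torch
import torch.nn as nn

class GraphAttentiveFilter(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Scores correlations from concatenated feature and spatial differences.
        self.score = nn.Sequential(
            nn.Linear(channels + 3, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, 1))
        self.proj = nn.Linear(channels, channels)

    def forward(self, feats, xyz, knn_idx):
        nbr_f = feats[knn_idx]                         # (N, K, C) neighbor features
        nbr_x = xyz[knn_idx]                           # (N, K, 3) neighbor coordinates
        df = nbr_f - feats.unsqueeze(1)                # feature differences
        dx = nbr_x - xyz.unsqueeze(1)                  # spatial differences
        attn = self.score(torch.cat([df, dx], dim=-1)) # (N, K, 1) correlation scores
        attn = torch.softmax(attn, dim=1)              # suppress irrelevant neighbors
        return (attn * self.proj(nbr_f)).sum(dim=1)    # (N, C) aggregated feature
```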
To further improve our graph convolution networks for
point cloud segmentation, we adopt a Spatial Feature Pro-
jection (SFP) module for graph convolutions. The spa-
tial feature projection module projects the spatial informa-
tion of points into the feature space, which helps graph
convolutions with isotropic kernels to model spatial vari-
ations effectively. Moreover, we design a graph-shared
down-sampling and up-sampling strategy to better utilize
the graph structures in the decoder. In general, our key con-
tributions are summarized as follows:
• We construct a hierarchical graph convolution network
AF-GCN with a hybrid architecture design for point
cloud segmentation, which takes advantage of graph
convolution and self-attention mechanism.
• We propose a novel Graph Attentive Filter module to
suppress irrelevant information from distant neighbors
by estimating the correlation between the points from
both features and spatial structure information.
• We employ a Spatial Feature Projection module for
graph convolutions to handle the spatial variation of ir-
regular point clouds. To better exploit the graph struc-tures, we design a graph-shared down-sampling and
up-sampling strategy.
• Experimental results demonstrate our model achieves
state-of-the-art performance on multiple point cloud
segmentation datasets. Ablation studies also verify the
effectiveness of each proposed component.
|
Yu_Accidental_Light_Probes_CVPR_2023 | Abstract
Recovering lighting in a scene from a single image is
a fundamental problem in computer vision. While a mir-
ror ball light probe can capture omnidirectional lighting,
light probes are generally unavailable in everyday images.
In this work, we study recovering lighting from accidental
light probes (ALPs)—common, shiny objects like Coke cans,
which often accidentally appear in daily scenes. We propose
a physically-based approach to model ALPs and estimate
lighting from their appearances in single images. The main
idea is to model the appearance of ALPs by photogram-
metrically principled shading and to invert this process via
differentiable rendering to recover incidental illumination.
We demonstrate that we can put an ALP into a scene to allow
high-fidelity lighting estimation. Our model can also recover
lighting for existing images that happen to contain an ALP.*
I’d rather be Shiny. — Tamatoa from Moana, 2016
| 1. Introduction
Traditionally, scene lighting has been captured through
the use of light probes, typically a chromium mirror ball;
their shape (perfect sphere) and material (perfect mirror)
allow for a perfect measurement of all light that intersects
the probe. Unfortunately, perfect light probes rarely appear
in everyday photos, and it is unusual for people to carry
them around to place in scenes. Fortunately, many everyday
objects share the desired properties of light probes: Coke
cans, rings, and thermos bottles are shiny (high reflectance)
and curved (have a variety of surface normals). These ob-
jects can reveal a significant amount of information about
the scene lighting, and can be seen as imperfect “accidental”
light probes (e.g., the Diet Pepsi in Figure 1). Unlike perfect
light probes, they can easily be found in casual photos or
acquired and placed in a scene. In this paper, we explore us-
ing such everyday, shiny, curved objects as Accidental Light
Probes (ALPs) to estimate lighting from a single image.
*Project website: https://kovenyu.com/ALP
Figure 1. (Left) From an image that has an accidental light probe
(a Diet Pepsi can), we insert a virtual object (a Diet Coke can)
with estimated lighting using the accidental light probe (Middle),
and using estimated lighting from a recent state-of-the-art lighting
estimation method [49] (Right). Note how our method better re-
lights the inserted can to produce an appearance consistent with the
environment (e.g., the highlight reflection and overall intensity).
In general, recovering scene illumination from a single
view is fundamental for many computer vision applications
such as virtual object insertion [9], relighting [46], and pho-
torealistic data augmentation [51]. Yet, it remains an open
problem primarily due to its highly ill-posed nature. Images
are formed through a complex interaction between geometry,
material, and lighting [21], and without precise prior knowl-
edge of a scene’s geometry or materials, lighting estimation
is extremely under-constrained. For example, scenes that
consist primarily of matte materials reveal little information
about lighting, since diffuse surfaces behave like low-pass
filters on lighting during the shading process [38], eliminat-
ing high-frequency lighting information. To compensate for
the missing information, the computer vision community has
explored using deep learning to extract data-driven priors for
lighting estimation [14, 44]. However, these methods gener-
ally do not leverage physical measurements to address these
ambiguities, yet physical measurements can offer substantial
benefits in such an ill-posed setting.
For images with ALPs, we propose a physically-based
modeling approach for lighting estimation. The main idea is
to model the ALP appearance using physically-based shad-
ing and to invert this process to estimate lighting. This
inversion process involves taking an input image, estimating
the ALP’s 6D pose and scale, and then using the object’s
surface geometry and material to infer lighting. Compared to
purely data-driven learning approaches that rely on diverse,
high-quality lighting datasets, which are hard to acquire, our
physically-based approach generalizes to different indoor
and outdoor scenes.
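The inversion itself can be pictured as a small optimization loop. The sketch below assumes a differentiable render(mesh, material, pose, envmap) function is available as a placeholder (it is not a specific library API) and fits a low-resolution environment map by gradient descent on a photometric loss.

```python
# Minimal sketch of inverting the shading process to recover an environment map,
# assuming a differentiable `render` callable is supplied by the user.
import torch

def estimate_lighting(render, mesh, material, pose, target_img, steps=500, lr=1e-2):
    # Low-resolution spherical environment map, optimized in log space so that
    # light intensities stay positive.
    log_env = torch.zeros(16, 32, 3, requires_grad=True)
    opt = torch.optim.Adam([log_env], lr=lr)
    for _ in range(steps):
        pred = render(mesh, material, pose, log_env.exp())
        loss = ((pred - target_img) ** 2).mean()     # photometric reconstruction loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return log_env.exp().detach()
```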
To evaluate this technique, we collect a test set of real
images, where we put ALPs in daily scenes and show that
our approach can estimate high-fidelity lighting. We also
demonstrate lighting estimation and object insertion based
on existing images (Figure 1).
In summary, we make the following three contributions:
•We propose the concept of accidental light probes
(ALPs), which can provide strong lighting cues in ev-
eryday scenes and casual photos.
•We develop a physically-based approach for lighting
estimation for images with an ALP and show improved
visual performance compared to existing light estima-
tion techniques.
•We collect a dataset of ALPs and a dataset of images
with ALPs and light probes in both indoor and out-
door scenes. We demonstrate that our physically-based
model outperforms existing methods on these datasets.
|
Zhang_Object_Detection_With_Self-Supervised_Scene_Adaptation_CVPR_2023 | Abstract
This paper proposes a novel method to improve the per-
formance of a trained object detector on scenes with fixed
camera perspectives based on self-supervised adaptation.
Given a specific scene, the trained detector is adapted using
pseudo-ground truth labels generated by the detector itself
and an object tracker in a cross-teaching manner. When
the camera perspective is fixed, our method can utilize the
background equivariance by proposing artifact-free object
mixup as a means of data augmentation, and utilize accu-
rate background extraction as an additional input modal-
ity. We also introduce a large-scale and diverse dataset
for the development and evaluation of scene-adaptive ob-
ject detection. Experiments on this dataset show that our
method can improve the average precision of the original
detector, outperforming the previous state-of-the-art self-
supervised domain adaptive object detection methods by
a large margin. Our dataset and code are published at
https://github.com/cvlab-stonybrook/scenes100.
| 1. Introduction
The need to detect objects in video streams from station-
ary cameras arises in many computer vision applications,
including video surveillance and autonomous retail. In gen-
eral, different applications require the detection of different
object categories, and each computer-vision-based product
will have its own detector. However, for a specific product,
there is typically a single detector that will be used for many
cameras/scenes. For example, a typical video surveillance
product would use the same detector to detect pedestrians
and vehicles for network cameras installed at different loca-
tions. Unfortunately, a single detector might not work well
for all scenes, leading to trivial and unforgiving mistakes.
This fundamental problem of many computer vision
products stems from the limited generalization power of a
single model, due to limited training data, limited model ca-
pacity, or both. One can attempt to address this problem byusing more training data, but it will incur additional cost for
data collection and annotation. Furthermore, in many cases,
due to the low latency requirement or the limited comput-
ing resources for inference, a product is forced to use a very
lightweight network, and this network will have limited rep-
resentation capacity to generalize across many scenes.
In this paper, instead of having a single scene-generic
detector, we propose using scene-specific detectors. This
yields higher detection performance as each detector is
customized for a specific scene, and allows us to use a
lightweight model without sacrificing accuracy as each de-
tector is only responsible for one scene.
Obtaining scene-specific detectors, however, is very
challenging. A trivial approach is to train a detector for
each scene separately, but this requires an enormous amount
of annotated training data. Instead, we propose a self-
supervised method to adapt a pre-trained detector to each
scene. Our method records the unlabeled video frames
in the past, uses the trained detector to detect objects in
those frames, and generates augmented training data based
on those detections. Although the detections made by the
pre-trained model can be noisy, they can still be useful for
generating pseudo annotated data. We further extend those
pseudo bounding boxes by applying object tracking [2, 57]
along the video timeline, aiming to propagate the detections
to adjacent frames to recover some of the false negatives not
returned by the detector. We also use multiple detectors to
obtain the pseudo labels and train the detector in a cross-
teaching manner, taking the advantage of the ensemble of
models [13, 24].
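In pseudocode, the pseudo-label generation stage might look like the following sketch, where detector and tracker are abstract stand-ins for the pre-trained detector and an off-the-shelf tracker; the score threshold and the returned data structures are assumptions for illustration only.

```python
# Sketch of building pseudo ground truth for scene adaptation: run the trained
# detector on recorded frames and propagate confident boxes to adjacent frames
# with a tracker (both passed in as abstract callables).
def build_pseudo_labels(frames, detector, tracker, score_thresh=0.7):
    pseudo = {}
    for t, frame in enumerate(frames):
        # Keep only confident detections as initial pseudo boxes.
        dets = [d for d in detector(frame) if d["score"] >= score_thresh]
        pseudo[t] = list(dets)

    # Tracking along the video timeline recovers some false negatives
    # that the detector missed in individual frames.
    tracks = tracker(frames, pseudo)        # assumed to return {frame_idx: [boxes]}
    for t, boxes in tracks.items():
        pseudo[t].extend(boxes)
    return pseudo
```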
Exploiting the stationary nature of the camera, we pro-
pose two additional techniques to boost the detection perfor-
mance: location-aware mixup and background-augmented
input. The former is to generate more samples during train-
ing through object mixup [76] that contains less artifacts,
based on the aforementioned pseudo boxes generated from
detection and tracking. The latter involves estimating the
background image and fusing it with the detector’s input.
In short, the main contribution of our paper is a novel
framework that utilizes self-supervision, location-aware ob-
ject mixup, and background modeling to improve the detec-
tion performance of a pre-trained object detector on scenes
with stationary cameras. We also contribute a large scale
and diverse dataset for the development of scene adap-
tive object detection, which contains sufficient quantity and
quality annotations for evaluation.
|
Zhang_Frequency-Modulated_Point_Cloud_Rendering_With_Easy_Editing_CVPR_2023 | Abstract
We develop an effective point cloud rendering pipeline
for novel view synthesis, which enables high fidelity local
detail reconstruction, real-time rendering and user-friendly
editing. In the heart of our pipeline is an adaptive fre-
quency modulation module called Adaptive Frequency Net
(AFNet) , which utilizes a hypernetwork to learn the local
texture frequency encoding that is consecutively injected
into adaptive frequency activation layers to modulate the
implicit radiance signal. This mechanism improves the fre-
quency expressive ability of the network with richer fre-
quency basis support, only at a small computational bud-
get. To further boost performance, a preprocessing mod-
ule is also proposed for point cloud geometry optimization
via point opacity estimation. In contrast to implicit render-
ing, our pipeline supports high-fidelity interactive editing
based on point cloud manipulation. Extensive experimental
results on NeRF-Synthetic, ScanNet, DTU and Tanks and
Temples datasets demonstrate the superior performances
achieved by our method in terms of PSNR, SSIM and LPIPS,
in comparison to the state-of-the-art. Code is released at
https://github.com/yizhangphd/FreqPCR.
| 1. Introduction
Photo-realistic rendering and editing of 3D representa-
tions is a key problem in 3D computer vision and graph-
ics with numerous applications, such as computer games,
VR/AR, and video creation. In particular, recently intro-
duced neural radiance field (NeRF) [21] has inspired some
follow-up works aiming to editable rendering [15, 17, 20,
47, 49, 55]. Due to the deeply coupled black-box net-
work, NeRF-based object-level editing usually requires a
pre-trained segmentation model to separate the objects to be
edited [17, 55]. Although some recent voxel-based variants
of NeRF [47,58] achieve multi-scene composition, they still
lack the ability to extract target objects from voxels.
In contrast to implicit rendering, point cloud render-
ing [1,6,13,18,33,36,57] is a promising editable rendering
model. On the one hand, explicit 3D representations are bet-
ter for interactive editing. On the other hand, the geometric
priors provided by point clouds can help us avoid massive
sampling in volume rendering methods, which can meet the
requirements of some real-time applications. As a class of
representative point cloud rendering methods, NPBG and
NPBG++ [1,33] achieve real-time rendering by using point-
wise features for encoding appearance information and an
U-Net [35] for decoding, respectively. However, the param-
eter quantity increases with the size of point cloud, which
may limit their application due to the excessive computa-
tional and memory complexity. Huang et al. [13] combine
point clouds with implicit rendering, where explicit point
clouds are used to estimate the geometry, and implicit radi-
ance mapping is used to predict view-dependent appearance
of surfaces. However, quantitative evaluation of its render-
ing results is significantly lower than that of implicit render-
ing methods such as NeRF [21], mainly due to the following
reasons: 1) the color of each viewing ray only depends on
a single surface point, thus without multiple sample color
aggregation for error attenuation, surface based rendering
techniques require radiance mapping to have a more precise
and expressive frequency encoding ability; and 2) defects
of the point cloud geometry reconstructed by MVSNet [56]
cause wrong surface estimation. To this end, we introduce
Adaptive Frequency Net (AFNet) to improve frequency ex-
pression ability of the radiance mapping and a preprocess-
ing module for point cloud geometry optimization.
Radiance mapping, also known as radiance field, is a
type of Implicit Neural Representation (INR). There have
been some studies [2, 7, 32, 41, 45, 46, 60] on the expressive
power and inductive bias of INRs. The standard Multi-layer
Perceptrons (MLPs) with ReLU activation function are well
known for the strong spectral bias towards reconstructing
low frequency signals [32]. Some recent works [7, 41, 46]
introduce strategies to enhance the high-frequency repre-
sentation of MLPs from a global perspective. However,
from a local perspective, the frequencies of a 3D scene are
region-dependent and most real objects are composed by
both weak textured regions and strong textured ones. Moti-
vated by this, we design a novel adaptive frequency modula-
tion mechanism based on HyperNetwork architecture [11],
which learns the local texture frequency and injects it into
adaptive frequency activation layers to modulate the im-
plicit radiance signal. The proposed mechanism can pre-
dict suitable frequency without frequency supervision and
modulate the radiance signal with adaptive frequency ba-
sis support to express more complex textures at negligible
computational overhead.
Previous surface point-based works [1, 13, 33] could not
optimize the point cloud geometry because they keep only
the closest point as a surface estimation for each ray. But if
we sample multiple points per ray during rendering, it will
greatly reduce the rendering speed. Therefore, we use the
volume rendering method as a preprocessing module to op-
timize the point cloud geometry. Specifically, we keep more
points in the pixel buffer and learn the opacity of each point
based on volume rendering. We find in our experiments that
for some poorly reconstructed scenes, point cloud geome-
try optimization can improve the rendering PSNR by 2-4dB
and avoid rendering artifacts.
For rigid object editing, we follow the deformation field
construction [28–31, 48] to render the edited point cloud.
Point cloud can be seen as a bridge between user editing
and deformation field to achieve interactive editing and ren-
dering. Users only need to edit the point cloud, and the
deformation field between the original 3D space and the de-
formed space is easy to obtain by querying the correspond-
ing transformations performed on point cloud. Moreover, to
avoid cross-scene training in multi-scene composition ap-
plication, we develop a masking strategy based on depth
buffer to combine multiple scenes at the pixel level.
We evaluate our method on NeRF-Synthetic [21], Scan-
Net [5], DTU [14] and Tanks and Temples [16] datasets
and comprehensively compare the proposed method with
other works in terms of performance (including PSNR,
SSIM and LPIPS), model complexity, rendering speed, and
editing ability. Our performance outperforms NeRF [21]
and all surface point-based rendering methods [1, 13, 33],
and is comparable to Compressible-composable NeRF (CC-
NeRF), i.e., the latest NeRF-based editable variant. We
achieve a real-time rendering speed of 39.27 FPS on NeRF-
Synthetic, which is 1700 ×faster than NeRF and 37 ×
faster than CCNeRF. We also reproduce the scene editing
of Object-NeRF [55] and CCNeRF [47] on ToyDesk [55]
and NeRF-Synthetic [21] dataset, respectively. As shown
in Fig. 1, we achieve real-time rendering with sharper de-
tails and user-friendly editing. The above results demon-
strate that our method is comprehensive in terms of both
rendering and editing and has great application potentials.
|
Xu_Unsupervised_3D_Shape_Reconstruction_by_Part_Retrieval_and_Assembly_CVPR_2023 | Abstract
Representing a 3D shape with a set of primitives can aid
perception of structure, improve robotic object manipula-
tion, and enable editing, stylization, and compression of
3D shapes. Existing methods either use simple paramet-
ric primitives or learn a generative shape space of parts.
Both have limitations: parametric primitives lead to coarse
approximations, while learned parts offer too little control
over the decomposition. We instead propose to decompose
shapes using a library of 3D parts provided by the user,
giving full control over the choice of parts. The library
can contain parts with high-quality geometry that are suit-
able for a given category, resulting in meaningful decom-
positions with clean geometry. The type of decomposition
can also be controlled through the choice of parts in the li-
brary. Our method works via an unsupervised approach that
iteratively retrieves parts from the library and refines their
placements. We show that this approach gives higher recon-
struction accuracy and more desirable decompositions than
existing approaches. Additionally, we show how the decom-
position can be controlled through the part library by using
different part libraries to reconstruct the same shapes.
| 1. Introduction
The ability to compactly represent a 3D shape as a com-
bination of primitive elements has applications in multiple
domains. In computer vision, the ability to automatically
decompose a shape into parts can aid machine perception
of the 3D structure of objects, which can in turn help au-tonomous agents plan how to manipulate such objects.
In computer graphics, a combination of primitives can
be used as a compressed geometry representation, as a way
to abstract, stylize, or edit a 3D shape by allowing users to
alter the underlying primitive library. Ideally, a system that
performs this kind of shape decomposition should be able
to do so without supervision in the form of ground-truth
decompositions, as such data is rarely available at scale.
Past research in vision and graphics has studied this
unsupervised shape decomposition problem. Initially, re-
searchers sought methods for decomposing 3D shapes into
sets of simple parametric primitives, such as cuboids or su-
perquadric surfaces [18,21,23,26]. These methods produce
clean, parametric geometry as output, and the choice of
primitive type allows a small degree of user control over the
decomposition. However, parametric primitives produce
only a coarse approximation of the input shape, which may
not be desirable in all applications. Thus, more recent work
has investigated unsupervised decomposition of shapes into
arbitrarily-shaped primitives whose geometries are deter-
mined by a neural network [4, 10, 17]. These methods pro-
duce a set of “neural primitives” whose union closely ap-
proximates the input shape. However, the geometries of
these primitives may contain artifacts (e.g. bumps, blobs).
Further, these methods offer little to no control over the type
of decomposition they produce – the network outputs what-
ever primitives it thinks are best to reconstruct the input
shape since it lacks access to a supervised part prior.
Is it possible to obtain a decomposition of a 3D shape
whose primitives exhibit clean geometry and closely recon-
struct the input shape, while also providing more control
over the type of decomposition produced? This is possible
if, rather than using simple parametric primitives or arbi-
trary neural primitives, one chooses a middle point between
these two extremes: reconstruct an input shape by retrieving
and assembling primitives from a library of pre-defined 3D
parts . This retrieve-and-assemble approach has several ad-
vantages. First, the parts in the library can be high-quality
3D meshes, guaranteeing clean geometry as output. Sec-
ond, a large part library can contain parts that are good
geometric matches for different regions of various shapes,
meaning that accurate reconstructions of input shapes are
possible. Finally, this approach offers a high degree of con-
trollability, as the user can change the part library to produce
different decompositions of the same input shape.
In this paper, we present a method for unsupervised de-
composition of 3D shapes using a user-defined library of
parts. Finding a subset of parts from a large part library
which best reconstructs an input shape is a large-scale com-
binatorial search problem. To make this problem tractable,
we represent the library of parts on a continuous manifold
by training a part autoencoder. This continuous representa-
tion of the part library allows us jointly optimize for the
identities and poses of parts which reconstruct the input
shape. To escape the many poor local optima in this
optimization landscape, the algorithm periodically uses its
current predicted set of parts to segment the input shape;
these segments are then re-encoded into the part feature
manifold to produce a new estimate of the parts that best
reconstruct the input shape. This data-driven, discontinu-
ous jump in the optimization state is similar to stages from
other non-gradient-based algorithms for global optimiza-
tion or latent variable estimation, including the mean shift
step from the mean shift algorithm and the E-step from ex-
pectation maximization [2].
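At a high level, the alternation can be sketched as follows. The pre-trained part autoencoder, the chamfer distance, the pose application, and the segmentation step are all passed in as placeholder callables, since their concrete interfaces are not specified here; this is an illustration of the loop structure only.

```python
# High-level sketch of the alternating retrieve-and-assemble optimization.
import torch

def reconstruct(target_pts, part_encoder, part_decoder, chamfer, apply_pose,
                segment_by_parts, latent_dim, num_parts=8, outer=5, inner=200, lr=1e-2):
    z = torch.randn(num_parts, latent_dim, requires_grad=True)   # part identities on the manifold
    pose = torch.zeros(num_parts, 7, requires_grad=True)         # translation + quaternion per part

    for _ in range(outer):
        opt = torch.optim.Adam([z, pose], lr=lr)
        for _ in range(inner):                                   # gradient-based refinement
            parts = [apply_pose(part_decoder(z[i]), pose[i]) for i in range(num_parts)]
            loss = chamfer(torch.cat(parts, dim=0), target_pts)
            opt.zero_grad(); loss.backward(); opt.step()

        # Discontinuous update: segment the target with the current parts and
        # re-encode each segment onto the part manifold (akin to an E-step).
        segments = segment_by_parts(target_pts, [p.detach() for p in parts])
        with torch.no_grad():
            z.copy_(torch.stack([part_encoder(s) for s in segments]))
    return z.detach(), pose.detach()
```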
Our algorithm can be run independently for any individ-
ual target shape, allowing it to work in a “zero-shot” set-
ting. When a larger dataset of related shapes is available, we
can also optimize for their part decomposition in advance (a
“training” phase) and then perform fast decomposition of a
new shape from that category by initializing its decomposi-
tion using its nearest neighbor from the “training” set.
We evaluate our algorithm by using it to reconstruct
shapes from point clouds, using parts from the PartNet
dataset. We compare to the recent Neural Parts unsuper-
vised decomposition system [17] and show that our al-
gorithm produces qualitatively more desirable decomposi-
tions that also achieve higher reconstruction accuracy. We
demonstrate the control offered by our method by show-
ing how it is possible to reconstruct shapes from one cat-
egory using parts from another (e.g. make a chair out of
lamp parts). This also has application for 3D graphics con-
tent creation, which we demonstrate by reconstructing tar-
get shapes using parts from a modular 3D asset library.In summary, our contribution is an unsupervised algo-
rithm which retrieves and poses 3D parts to reconstruct in-
put 3D shapes. We will release our code upon publication.
|
Yu_Graphics_Capsule_Learning_Hierarchical_3D_Face_Representations_From_2D_Images_CVPR_2023 | Abstract
The function of constructing the hierarchy of objects is
important to the visual process of the human brain. Previ-
ous studies have successfully adopted capsule networks to
decompose the digits and faces into parts in an unsuper-
vised manner to investigate the similar perception mecha-
nism of neural networks. However, their descriptions are
restricted to the 2D space, limiting their capacities to imi-
tate the intrinsic 3D perception ability of humans. In this
paper, we propose an Inverse Graphics Capsule Network
(IGC-Net) to learn the hierarchical 3D face representations
from large-scale unlabeled images. The core of IGC-Net is
a new type of capsule, named graphics capsule, which rep-
resents 3D primitives with interpretable parameters in com-
puter graphics (CG), including depth, albedo, and 3D pose.
Specifically, IGC-Net first decomposes the objects into a set
of semantic-consistent part-level descriptions and then as-
sembles them into object-level descriptions to build the hier-
archy. The learned graphics capsules reveal how the neural
networks, oriented at visual perception, understand faces as
a hierarchy of 3D models. Besides, the discovered parts can
be deployed to the unsupervised face segmentation task to
evaluate the semantic consistency of our method. Moreover,
the part-level descriptions with explicit physical meanings
provide insight into the face analysis that originally runs in
a black box, such as the importance of shape and texture
for face recognition. Experiments on CelebA, BP4D, and
Multi-PIE demonstrate the characteristics of our IGC-Net.
Corresponding author. | 1. Introduction
A path toward autonomous machine intelligence is to en-
able machines to have human-like perception and learning
abilities [19]. As humans, by only observing the objects,
we can easily decompose them into a set of part-level com-
ponents and construct their hierarchy even though we have
never seen these objects before. This phenomenon is sup-
ported by the psychological studies that the visual process
of the human brain is related to the construction of the hi-
erarchical structural descriptions [11,22,23,29]. To investi-
gate the similar perception mechanism of neural networks,
previous studies [18, 35] incorporate the capsule networks,
which are designed to present the hierarchy of objects and
describe each entity with interpretable parameters. After
observing a large-scale of unlabeled images, these meth-
ods successfully decompose the digits or faces into a set of
parts, which provide insight into how the neural networks
understand the objects. However, their representations are
limited in the 2D space. Specifically, these methods follow
the analysis-by-synthesis strategy in model training and try
to reconstruct the image by the decomposed parts. Since the
parts are represented by 2D templates, the reconstruction
becomes estimating the affine transformations to warp the
templates and put them in the right places, which is just like
painting with stickers. This strategy performs well when
the objects are intrinsically 2D, like handwritten digits and
frontal faces, but has difficulty in interpreting 3D objects in
the real world, especially when we want a view-independent
representation like humans [2].
How to represent the perceived objects is the core re-
search topic in computer vision [3, 25]. One of the most
popular theories is the Marr’s theory [22, 23]. He believed
that the purpose of the vision is to build the descriptions
of shapes and positions of things from the images and con-
struct hierarchical 3D representations of objects for recog-
nition. In this paper, we try to materialize Marr’s the-
ory on human faces and propose an Inverse Graphics Cap-
sule Network (IGC-Net), whose primitive is a new type
of capsule (i.e., graphics capsule) that is defined by com-
puter graphics (CG), to learn the hierarchical 3D represen-
tations from large-scale unlabeled images. Figure 1 shows
an overview of the proposed method. Specifically, the hi-
erarchy of the objects is described with the part capsules
and the object capsules, where each capsule contains a set
of interpretable parameters with explicit physical meanings,
including depth, albedo, and pose. During training, the in-
put image is first encoded to a global shape and albedo em-
beddings, which are sent to a decomposition module to get
the spatially-decoupled part-level graphics capsules. Then,
these capsules are decoded by a shared capsule decoder to
get explicit 3D descriptions of parts. Afterward, the parts
are assembled by their depth to generate the object capsules
as the object-centered representations, naturally construct-
ing the part-object hierarchy. Finally, the 3D objects em-
bedded in the object capsules are illuminated, posed, and
rendered to fit the input image, following the analysis-by-
synthesis manner. When an IGC-Net is well trained, the
learned graphics capsules naturally build hierarchical 3D
representations.
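The decompose-then-assemble flow can be condensed into a few lines. Every callable below (encoder, decompose, capsule_decoder, render) is an assumed interface, and pose and illumination handling are omitted; the sketch only illustrates how part capsules are composed into an object-level description by per-pixel depth.

```python
# Condensed sketch of the part-to-object hierarchy; all callables are placeholders.
import torch

def forward_scene(image, encoder, decompose, capsule_decoder, render, num_parts=6):
    shape_emb, albedo_emb = encoder(image)                   # global embeddings
    part_caps = decompose(shape_emb, albedo_emb, num_parts)  # iterable of part capsules
    decoded = [capsule_decoder(c) for c in part_caps]        # each: (depth, albedo) maps
    depths = torch.stack([d for d, _ in decoded])            # (K, H, W)
    albedos = torch.stack([a for _, a in decoded])           # (K, H, W, 3)

    # Assemble: at every pixel keep the part closest to the camera.
    winner = depths.argmin(dim=0)                            # (H, W) winning part index
    h, w = winner.shape
    hh, ww = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    object_depth = depths[winner, hh, ww]                    # (H, W)
    object_albedo = albedos[winner, hh, ww]                  # (H, W, 3)
    return render(object_depth, object_albedo)               # illuminate, pose, rasterize
```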
We apply IGC-Net to human faces, which have been
widely used to investigate human vision system [31] due
to the similar topology structures and complicated appear-
ances. Thanks to the capacity of the 3D descriptions, IGC-
Net successfully builds the hierarchy of in-the-wild faces
that are captured under various illuminations and poses. We
evaluate the IGC-Net performance on the unsupervised face
segmentation task, where the silhouettes of the discovered
parts are regarded as segment maps. We also incorporate
the IGC-Net into interpretable face analysis to uncover the
mechanism of neural networks when recognizing faces.
The main contributions of this paper are summarized as:
This paper proposes an Inverse Graphics Capsule Net-
work (IGC-Net) to learn the hierarchical 3D face repre-
sentations from unlabeled images. The learned graph-
ics capsules in the network provide insight into how
the neural networks, oriented at visual perception, un-
derstand faces as a hierarchy of 3D models.
A Graphics Decomposition Module (GDM) is pro-
posed for part-level decomposition, which incorpo-
rates shape and albedo information as cues to ensure
that each part capsule represents a semantically con-
sistent part of objects.
We conduct interpretable face analysis based on the
part-level 3D descriptions of graphics capsules. Be-
sides, the silhouettes of 3D parts are deployed to theunsupervised face segmentation task. Experiments on
CelebA, BP4D, and Multi-PIE show the effectiveness
of our method.
|
Yin_3D_GAN_Inversion_With_Facial_Symmetry_Prior_CVPR_2023 | Abstract
Recently, a surge of high-quality 3D-aware GANs have
been proposed, which leverage the generative power of neu-
ral rendering. It is natural to associate 3D GANs with GAN
inversion methods to project a real image into the gener-
ator’s latent space, allowing free-view consistent synthesis
and editing, referred as 3D GAN inversion. Although with
the facial prior preserved in pre-trained 3D GANs, recon-
structing a 3D portrait with only one monocular image is
still an ill-pose problem. The straightforward application
of 2D GAN inversion methods focuses on texture similarity
only while ignoring the correctness of 3D geometry shapes.
It may raise geometry collapse effects, especially when re-
constructing a side face under an extreme pose. Besides,
the synthetic results in novel views are prone to be blurry.
In this work, we propose a novel method to promote 3D
GAN inversion by introducing facial symmetry prior. We
design a pipeline and constraints to make full use of the
pseudo auxiliary view obtained via image flipping, which
helps obtain a view-consistent and well-structured geome-
try shape during the inversion process. To enhance texture
fidelity in unobserved viewpoints, pseudo labels from depth-
guided 3D warping can provide extra supervision. We de-
sign constraints to filter out conflict areas for optimization
in asymmetric situations. Comprehensive quantitative and
qualitative evaluations on image reconstruction and editing
demonstrate the superiority of our method.
| 1. Introduction
Recent 3D-aware generative adversarial networks (3D
GANs) have seen immense progress. By incorporating
a neural rendering engine into the generator network ar-
chitecture, 3D GANs can synthesize view-consistent im-
ages. To increase the generation resolution, existing meth-
ods [5, 12, 25, 30, 31, 36–38, 41] boost the 3D inductive bias
with an additional 2D CNN-based upsampler or an efficient 3D representation modeling method.
Figure 1. Visual examples of our inversion method. Directly applying a 2D GAN inversion method (PTI [28]) to the 3D GAN suffers from inaccurate geometry in novel views. Our method excels in synthesizing consistent geometry and high-fidelity texture in different views, even when reconstructing a face under an extreme pose.
With tremendous ef-
fort, 3D GANs can produce photorealistic images while en-
forcing strong 3D consistency across different views.
We are interested in the task of reconstructing a human
face with 3D geometry and texture given only one monocu-
lar image. It is an ill-posed problem that is close to the harsh conditions of real-world scenarios. With the power of 3D GANs,
it seems achievable via projecting a target image onto the
manifold of a pre-trained generator. The process is referred
as 3D GAN inversion. A straightforward path is to follow
the 2D GAN inversion method [28], i.e., optimizing the la-
tent code and the network parameters of the generator to
overfit the specific portrait.
However, since the ground truth 3D geometry is absent
given one monocular image, the inversion result is far from
satisfactory. The process of fitting a 3D GAN to one im-
age would sacrifice geometric correctness in order to make
the synthetic texture as close as possible to the input, even
destroying the original semantic-rich latent space. As the
optimization process goes, the face geometry tends to de-
generate into a flattened shape, due to the absence of geom-
etry supervision, e.g., images from other views. Besides,
there exist quality issues in texture synthesis under novel
views. The rendered images of unseen views tend to be
blurry and inconsistent with the original image, especially
when reconstructing a side face under an extreme pose. Be-
cause there is no texture supervision for unseen views given
only one monocular image. The failure cases of directly
applying [ 28] are illustrated in Fig. 1.
In this work, to alleviate the issue caused by missing ge-
ometry and texture supervision under multiple views, we
propose a novel 3D GAN inversion approach by taking full
advantage of facial symmetry prior to construct pseudo su-
pervision of different views. Intuitively, we note that human
faces are almost symmetric. Assuming the given portrait is
symmetric, we can obtain an additional perspective of the
portrait by simply mirroring the image. The images of two
distinct views can provide geometric relations between the
3D points and their 2D projections based on epipolar geom-
etry. Motivated by this, we seek to leverage facial symmetry
as the geometric prior constraining the inversion. The sym-
metry prior is also employed in a traditional 3D reconstruc-
tion work [ 35]. We leverage the mirrored image as extra
supervision of another view when performing the inversion,
which prevents the geometry collapse. A rough geometry
can be obtained by the inversion with the original and mir-
ror images.
To further enhance texture quality and geometry in novel
views, we employ depth-guided 3D warping to generate the
pseudo images of the views surrounding the input and sym-
metric camera pose. The depth is inferred from the rough
3D volume. The original image along with the pseudo im-
ages are used to fine-tune the generator’s parameters for the
joint promotion of texture and geometry. To prevent the op-
timized geometry from deviating too much from the rough
geometry, we design a geometry regularization term as a
constraint. However, human faces are never fully symmet-
ric in practice, neither in shape nor appearance. Therefore,
we design several constraints to extract meaningful infor-
mation adaptively from the mirror image without compro-
mising the original reconstruction quality.
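One optimization step under these constraints might look like the sketch below, where G.synthesis is assumed to return both an image and a depth map for a latent code and camera pose, mirror_pose reflects the camera about the symmetry plane, and the loss weights are illustrative; none of these names come from a released implementation.

```python
# Sketch of a single inversion step with the facial symmetry prior.
import torch

def inversion_step(G, w, img, pose, rough_depth, mirror_pose, opt,
                   lam_sym=0.5, lam_geo=1.0):
    flipped = torch.flip(img, dims=[-1])               # pseudo auxiliary view (horizontal flip)
    pred, depth = G.synthesis(w, pose)                 # assumed to return image and depth
    pred_mirror, _ = G.synthesis(w, mirror_pose(pose))

    loss = ((pred - img) ** 2).mean()                              # original view
    loss = loss + lam_sym * ((pred_mirror - flipped) ** 2).mean()  # symmetric view
    loss = loss + lam_geo * ((depth - rough_depth) ** 2).mean()    # geometry regularizer
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```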
Our main contributions are as follows:
• We propose a novel 3D GAN inversion method by in-
corporating facial symmetry prior. It enables a high-
quality reconstruction while preserving the multi-view
consistency in geometry and texture.• We conduct comprehensive experiments to demon-
strate the effectiveness of our method and compare it
with many state-of-the-art inversion methods. We also
apply our method to various downstream applications.
|
Yang_PVT-SSD_Single-Stage_3D_Object_Detector_With_Point-Voxel_Transformer_CVPR_2023 | Abstract
Recent Transformer-based 3D object detectors learn
point cloud features either from point- or voxel-based rep-
resentations. However, the former requires time-consuming
sampling while the latter introduces quantization errors.
In this paper, we present a novel Point-Voxel Transformer
for single-stage 3D detection (PVT-SSD) that takes advan-
tage of these two representations. Specifically, we first use
voxel-based sparse convolutions for efficient feature encod-
ing. Then, we propose a Point-Voxel Transformer (PVT)
module that obtains long-range contexts in a cheap manner
from voxels while attaining accurate positions from points.
The key to associating the two different representations is
our introduced input-dependent Query Initialization mod-
ule, which could efficiently generate reference points and
content queries. Then, PVT adaptively fuses long-range
contextual and local geometric information around refer-
ence points into content queries. Further, to quickly find
the neighboring points of reference points, we design the
Virtual Range Image module, which generalizes the native
range image to multi-sensor and multi-frame. The experi-
ments on several autonomous driving benchmarks verify the
effectiveness and efficiency of the proposed method. Code
will be available.
| 1. Introduction
3D object detection from point clouds has become in-
creasingly popular thanks to its wide applications, e.g.,
autonomous driving and virtual reality. To process un-
ordered point clouds, Transformer [51] has recently at-
tracted great interest as the self-attention is invariant to
the permutation of inputs. However, due to the quadratic
complexity of self-attention, it involves extensive compu-
tation and memory budgets when processing large point
clouds. To overcome this problem, some point-based meth-
ods [29, 36, 37] perform attention on downsampled point
sets, while some voxel-based methods [10, 33, 64] employ
attention on local non-empty voxels. Nevertheless, the for-
mer requires farthest point sampling (FPS) [41] to sam-
ple point clouds, which is time-consuming on large-scale
outdoor scenes [19], while the latter inevitably introduces
quantization errors during voxelization, which loses accu-
rate position information.
In this paper, we propose PVT-SSD that absorbs the ad-
vantages of the above two representations, i.e., voxels and
points, while overcoming their drawbacks. To this end, in-
stead of sampling points directly, we convert points to a
small number of voxels through sparse convolutions and
sample non-empty voxels to reduce the runtime of FPS.
Then, inside the PVT-SSD, voxel features are adaptively
fused with point features to make up for the quantization er-
ror. In this way, both long-range contexts from voxels and
accurate positions from points are preserved. Specifically,
PVT-SSD consists of the following components:
Firstly, we propose an input-dependent Query Initial-
ization module inspired by previous indoor Transformer-
based detectors [29, 36], which provides queries with bet-
ter initial positions and instance-related features. Un-
like [29, 36], our queries originate from non-empty voxels
instead of points to reduce the sampling time. Concretely,
with the 3D voxels generated by sparse convolutions, we
first collapse 3D voxels into 2D voxels by merging voxels
along the height dimension to further reduce the number of
voxels. The sample operation is then applied to select a rep-
resentative set of voxels. We finally liftsampled 2D voxels
to generate 3D reference points. Subsequently, the corre-
sponding content queries are obtained in an efficient way
by projecting reference points onto a BEV feature map and
indexing features at the projected locations.
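The collapse-sample-lift procedure can be sketched roughly as follows. Inputs, the farthest-point-sampling routine, and the meter-to-pixel mapping of the BEV feature map are assumptions made purely for illustration.

```python
# Sketch of query initialization: collapse voxels to BEV, sample, lift, and
# index content queries. pc_range = [xmin, ymin, zmin, xmax, ymax, zmax].
import torch
import torch.nn.functional as F

def init_queries(voxel_centers, bev_feat, num_queries, pc_range, fps):
    # Collapse: drop the height dimension and deduplicate BEV cells.
    bev_xy = torch.unique(voxel_centers[:, :2], dim=0)
    # Sample: farthest point sampling on the (much smaller) set of 2D cells.
    ref_xy = bev_xy[fps(bev_xy, num_queries)]
    # Lift: append a height coordinate (here the scene mid-height) to get 3D refs.
    z = torch.full((num_queries, 1), 0.5 * (pc_range[2] + pc_range[5]))
    ref_points = torch.cat([ref_xy, z], dim=1)

    # Content queries: bilinearly sample BEV features at the projected locations,
    # assuming the BEV x/y axes align with the feature map's width/height.
    u = (ref_xy[:, 0] - pc_range[0]) / (pc_range[3] - pc_range[0]) * 2 - 1
    v = (ref_xy[:, 1] - pc_range[1]) / (pc_range[4] - pc_range[1]) * 2 - 1
    grid = torch.stack([u, v], dim=-1).view(1, 1, -1, 2)
    queries = F.grid_sample(bev_feat[None], grid, align_corners=False)
    return ref_points, queries.view(bev_feat.shape[0], -1).t()   # (Q, 3), (Q, C)
```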
Secondly, we introduce a Point-Voxel Transformer
module that captures long-range contextual features from
voxel tokens and extracts fine-grained point features from
point tokens. To be specific, the voxel tokens are obtained
from non-empty voxels around reference points to cover a
large attention range. In contrast, the point tokens are gener-
ated from neighboring points near reference points to retain
fine-grained information. These two different tokens are
adaptively fused by the cross-attention layer based on the
similarity with content queries to complement each other.
Furthermore, we design a Virtual Range Image mod-
ule to accelerate the neighbor querying process in the point-
voxel Transformer. With the constructed range image, refer-
ence points can quickly find their neighbors based on range
image coordinates. Unlike the native range image captured
by LiDAR sensors, we can handle situations where multiple
points overlap on the same pixel in the range image. There-
fore, it can be used for complex scenarios, such as multiple
sensors and multi-frame fusion.
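A simplified construction of such a virtual range image is sketched below; the resolution, the spherical projection, and the per-pixel slot count are assumptions chosen only to show how several overlapping points can share a single pixel instead of being discarded.

```python
# Sketch of building a virtual range image that keeps up to `max_pts` point
# indices per pixel, so overlapping points from multiple sensors or frames survive.
import numpy as np

def build_virtual_range_image(points, h=64, w=2048, max_pts=4):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-6
    az = 0.5 * (1.0 - np.arctan2(y, x) / np.pi)        # azimuth mapped to [0, 1)
    incl = (np.arcsin(z / r) + np.pi / 2) / np.pi       # inclination mapped to [0, 1]
    u = np.clip((az * w).astype(np.int64), 0, w - 1)
    v = np.clip((incl * h).astype(np.int64), 0, h - 1)

    image = -np.ones((h, w, max_pts), dtype=np.int64)   # -1 marks empty slots
    counts = np.zeros((h, w), dtype=np.int64)
    for i, (vi, ui) in enumerate(zip(v, u)):
        if counts[vi, ui] < max_pts:
            image[vi, ui, counts[vi, ui]] = i
            counts[vi, ui] += 1
    return image  # reference points can then query neighbors via their (v, u) pixel
```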
Extensive experiments have been conducted on several
detection benchmarks to verify the efficacy and efficiency
of our approach. PVT-SSD achieves competitive results on
KITTI [13], Waymo Open Dataset [48], and nuScenes [3].
|
Zeng_SceneComposer_Any-Level_Semantic_Image_Synthesis_CVPR_2023 | Abstract
We propose a new framework for conditional image syn-
thesis from semantic layouts of any precision levels, ranging
from pure text to a 2D semantic canvas with precise shapes.
More specifically, the input layout consists of one or more
semantic regions with free-form text descriptions and ad-
justable precision levels, which can be set based on the
desired controllability. The framework naturally reduces to
text-to-image (T2I) at the lowest level with no shape informa-
tion, and it becomes segmentation-to-image (S2I) at the high-
est level. By supporting the levels in-between, our framework
is flexible in assisting users of different drawing expertise
and at different stages of their creative workflow. We intro-
duce several novel techniques to address the challenges com-
ing with this new setup, including a pipeline for collecting
training data; a precision-encoded mask pyramid and a text
feature map representation to jointly encode precision level,
semantics, and composition information; and a multi-scale
guided diffusion model to synthesize images. To evaluate the proposed method, we collect a test dataset containing
user-drawn layouts with diverse scenes and styles. Experi-
mental results show that the proposed method can generate
high-quality images following the layout at given precision,
and compares favorably against existing methods. Project
page: https://zengxianyu.github.io/scenec/
| 1. Introduction
Recently, deep generative models such as StyleGAN
[24, 25] and diffusion models [9, 19, 49] have made a signif-
icant breakthrough in generating high-quality images. Im-
age generation and editing technologies enabled by these
models have become highly appealing to artists and design-
ers by helping their creative workflows. To make image
generation more controllable, researchers have put a lot
of effort into conditional image synthesis and introduced
models using various types and levels of semantic input
such as object categories, text prompts, and segmentation
1
maps, etc. [23, 35, 36, 43, 44, 67].
Table 1. Difference from related conditional image synthesis works. T2I: text to image, S2I: segmentation to image, ST2I: scene-based text to image [13], Box2I: bounding box layout to image [50].
Setting | Open-domain layout | Shape control | Sparse layout | Coarse shape | Level control
T2I     | ✓ | ✗ | ✗ | ✗ | ✗
S2I     | ✗ | ✓ | ✗ | ✗ | ✗
ST2I    | ✗ | ✓ | ✗ | ✗ | ✗
Box2I   | ✗ | ✗ | ✓ | ✗ | ✗
Ours    | ✓ | ✓ | ✓ | ✓ | ✓
However, existing models are not flexible enough to sup-
port the full creative workflow. They mostly consider fixed-
level semantics as the input [63], e.g. image-level text de-
scriptions in text-to-image generation (T2I) [35, 39, 41, 43,
44], or pixel-level segmentation maps in segmentation-to-
image generation (S2I) [23,36,67]. Recent breakthroughs on
T2I such as DALLE2 [39] and StableDiffusion [1,43] demon-
strate extraordinary capabilities of generating high-quality
results. They can convert a rough idea into visual messages
to provide inspirations at the beginning of the creative pro-
cess, but provide no further control over image composition.
On the other hand, S2I allows users to precisely control the
image composition. As it is extremely challenging to draw
a detailed layout directly, S2I is more useful for later cre-
ative stages given initial designs. For real-world use cases,
it is highly desirable to have a model which can generate
images from not only pure text or segmentation maps, but
also intermediate-level layouts with coarse shapes.
To this end, we propose a new unified conditional image
synthesis framework to generate images from a semantic lay-
out at any combination of precision levels. It is inspired by
the typical coarse-to-fine workflow of artists and designers:
they first start from an idea, which can be expressed as a text
prompt or a set of concepts (Fig. 1 (a)), then tend to draw
the approximate outlines and refine each object (Fig. 1 (a)-
(d)). More specifically, we model a semantic layout as a set
of semantic regions with free-form text descriptions. The
layout can be sparse and each region can have a precision
level to control how well the generated object should fit to
the specified shape. The framework reduces to T2I when the
layout is the coarsest (Fig. 1 (a)), and it becomes S2I when
the layout is a segmentation map (Fig. 1 (d)). By adjusting
the precision level, users can achieve their desired controlla-
bility (Fig. 1 (a)-(d)). This framework is different from the
existing works in many aspects, as summarized in Table 1.
This new setup comes with several challenges. First, it
is non-trivial to encode open-domain layouts in image syn-
thesis frameworks. Second, to handle hand-drawn layouts
of varying precision, we need an effective and robust way
to inject the precision information into the layout encod-
ing. Third, there is no large-scale open-domain layout/image
dataset. To generate high-quality images and generalize to
novel concepts, a large and diverse training dataset is crucial.
We introduce several novel ideas to address these chal-lenges. First, we propose a text feature map representation
for encoding a semantic layout. It can be seen as a spatial ex-
tension of text embedding or generalization of segmentation
masks from binary to continuous space. Second, we intro-
duce a precision-encoded mask pyramid to model layout
precision. Inspired by the classical image pyramid mod-
els [2, 6, 47, 62], we relate shape precision to levels in a
pyramid representation and encode precision by dropping
out regions of lower precision levels. In other words, the l-th
level of the mask pyramid is a sub-layout (subset of regions)
consisting of semantic regions with precision level no less
thanl. By creating a text feature map for each sub-layout, we
obtain a text feature pyramid as a unified representation of
semantics, composition, and precision. Finally, we feed the
text feature pyramid to a multi-scale guided diffusion model
to generate images. We fulfill the need for training data by
collecting them from two sources: (1) large-scale image-
text pairs; (2) a relatively small pseudo layout/image dataset
using text-based object detection and segmentation. With
this multi-source training strategy, both text-to-image and
layout-to-image can benefit from each other synergistically.
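The precision-encoded pyramid can be pictured as follows: for each level, only regions whose precision is at least that level are kept, and every kept region paints its text embedding into a spatial feature map. The shapes, looping style, and function name are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of building a text feature pyramid from a semantic layout.
import torch

def text_feature_pyramid(masks, text_embs, precisions, num_levels):
    # masks: (R, H, W) binary region masks; text_embs: (R, D); precisions: (R,) ints.
    R, H, W = masks.shape
    D = text_embs.shape[1]
    pyramid = []
    for level in range(num_levels):
        keep = precisions >= level                     # sub-layout for this level
        fmap = torch.zeros(D, H, W)
        for r in torch.nonzero(keep).flatten():
            m = masks[r].bool()
            fmap[:, m] = text_embs[r].unsqueeze(1)     # paint region with its text feature
        pyramid.append(fmap)
    return pyramid                                     # list of (D, H, W) feature maps
```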
Our contributions are summarized as follows:
•A unified framework for diffusion-based image syn-
thesis from semantic layouts with any combination of
precision control.
•Novel ideas to build the model, including precision-
encoded mask pyramid and pyramid text feature map
representation, and multi-scale guided diffusion model,
and training with multi-source data.
•A new real-world user-drawn layout dataset and ex-
tensive experiments showing the effectiveness of our
model for text-to-image and layout-to-image generation
with precision control.
|
Zhang_Hyperspherical_Embedding_for_Point_Cloud_Completion_CVPR_2023 | Abstract
Most real-world 3D measurements from depth sensors
are incomplete, and to address this issue the point cloud
completion task aims to predict the complete shapes of
objects from partial observations. Previous works often
adapt an encoder-decoder architecture, where the encoder
is trained to extract embeddings that are used as inputs
to generate predictions from the decoder. However, the
learned embeddings have sparse distribution in the feature
space, which leads to worse generalization results during
testing. To address these problems, this paper proposes
a hyperspherical module, which transforms and normal-
izes embeddings from the encoder to be on a unit hyper-
sphere. With the proposed module, the magnitude and direc-
tion of the output hyperspherical embedding are decoupled
and only the directional information is optimized. We the-
oretically analyze the hyperspherical embedding and show
that it enables more stable training with a wider range of
learning rates and more compact embedding distributions.
Experiment results show consistent improvement of point
cloud completion in both single-task and multi-task learn-
ing, which demonstrates the effectiveness of the proposed
method.
| 1. Introduction
The continual improvement of 3D sensors has made
point clouds much more accessible, which drives the de-
velopment of algorithms to analyze them. Thanks to deep
learning techniques, state of the art algorithms for point
cloud analysis have achieved incredible performance [9,20–
22,25] by effectively learning representations from large 3D
datasets [3, 6, 26] and have many applications in robotics,
autonomous driving, and 3D modeling. However, point
clouds in the real-world are often incomplete and sparse
due to many reasons, such as occlusions, low resolution,
and the limited view of 3D sensors. So it is critical to have
an algorithm that is capable of predicting complete shapes
of objects from partial observations.
Given the importance of point cloud completion, it is
Figure 1. An illustration of the architecture proposed in this paper. The up-
per subfigure shows the general point cloud analysis structure, where the
embedding is directly output from the encoder without constraints. The
lower subfigure shows the structure of the model with the proposed hy-
perspherical module. The figures under the embeddings illustrate the co-
sine similarity distribution between embeddings, which indicates a more
compact embedding distribution achieved by the proposed method and im-
proves point cloud completion.
unsurprising that various methods have been proposed to
address this challenge [18, 27, 31, 34, 37, 41]. Most exist-
ing methods adapt encoder-decoder structures, in which the
encoder takes a partial point cloud as input and outputs
an embedding vector, and then it is taken by the decoder
which predicts a complete point cloud. The embedding
space is designed to be high-dimensional as it must have
large enough capacity to contain all information needed for
downstream tasks. However, the learned high-dimensional
embeddings, as shown in this paper, tend to have a sparse
distribution in the embedding space, which increases the
possibility that unseen features at testing are not captured
by the representation learned at training and leads to worse
generalizability of models.
Usually, one real-world application requires predictions
from multiple different tasks. For example, to grasp an
object in space the robot arm would need the informa-
tion about the shape, category, and orientation of the tar-
get object. In contrast to training all tasks individually from
scratch, a more numerically efficient approach would be to
train all relevant tasks jointly by sharing parts of networks
between different tasks [11,13,28]. However, existing point
cloud completion methods lack the analysis of accomplish-
ing point cloud completion jointly with other tasks. We
show that training existing point cloud completion methods
with other semantic tasks together leads to worse perfor-
mance when compared to learning each individually.
To address the above limitations, this paper proposes a
hyperspherical module which outputs hyperspherical em-
beddings for point cloud completion. The proposed hy-
perspherical module can be integrated into existing ap-
proaches with encoder-decoder structures as shown in Fig-
ure 1. Specifically, the hyperspherical module transforms
and constrains the output embedding onto the surface of
a hypersphere by normalizing the embedding’s magnitude
to unit, so only the directional information is kept for later
use. We theoretically investigate the effects of hyperspher-
ical embeddings and show that it improves the point cloud
completion models by more stable training with large learn-
ing rate and more generalizability by learning more com-
pact embedding distributions. We also demonstrate the
proposed hyperspherical embedding in multi-task learning,
where it helps reconcile the learning conflicts between point
cloud completion and other semantic tasks at training. The
reported improvements of the existing state-of-the-art ap-
proaches on several public datasets illustrate the effective-
ness of the proposed method. The main contributions of this
paper are summarized as follows:
• We propose a hyperspherical module that outputs hy-
perspherical embeddings, which improves the perfor-
mance of point cloud completion.
• We theoretically investigate the effects of hyperspher-
ical embeddings and demonstrate that the point cloud
completion benefits from them by stable training and
learning a compact embedding distribution.
• We analyze training point cloud completion with other
tasks and observe conflicts between them, which can
be reconciled by the hyperspherical embedding.
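To make the hyperspherical module described above concrete, the following is a minimal sketch: an MLP transforms the encoder embedding and an L2 normalization keeps only its direction on the unit hypersphere; the two-layer MLP and the 1024-d embedding size are illustrative assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of a hyperspherical module (PyTorch); dimensions are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HypersphericalModule(nn.Module):
    def __init__(self, in_dim=1024, out_dim=1024):
        super().__init__()
        # Small MLP that transforms the encoder embedding before normalization.
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(inplace=True),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, emb):
        z = self.mlp(emb)
        # Keep only the direction: project the embedding onto the unit hypersphere.
        return F.normalize(z, p=2, dim=-1)

# Usage: plug between an existing encoder and its decoders.
emb = torch.randn(4, 1024)            # embeddings from a partial-cloud encoder
hyper = HypersphericalModule()
z = hyper(emb)                        # (4, 1024), each row has unit L2 norm
print(z.norm(dim=-1))                 # ~1.0 for every sample
```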
|
Yi_MIME_Human-Aware_3D_Scene_Generation_CVPR_2023 | Abstract
Generating realistic 3D worlds occupied by moving hu-
mans has many applications in games, architecture, and
synthetic data creation. But generating such scenes is ex-
pensive and labor intensive. Recent work generates human
poses and motions given a 3D scene. Here, we take the
opposite approach and generate 3D indoor scenes given
3D human motion. Such motions can come from archival
motion capture or from IMU sensors worn on the body, effec-
tively turning human movement into a “scanner” of the 3D
world. Intuitively, human movement indicates the free-space
in a room and human contact indicates surfaces or objects
that support activities such as sitting, lying or touching. We
propose MIME (Mining Interaction and Movement to infer
3D Environments), which is a generative model of indoor
scenes that produces furniture layouts that are consistent
with the human movement. MIME uses an auto-regressive
transformer architecture that takes the already generated
objects in the scene as well as the human motion as input,
and outputs the next plausible object. To train MIME, we
build a dataset by populating the 3D FRONT scene dataset
with 3D humans. Our experiments show that MIME pro-
duces more diverse and plausible 3D scenes than a recent
generative scene method that does not know about human
movement. Code and data are available for research at
https://mime.is.tue.mpg.de .
*This work was performed when C.P. H. was at the MPI-IS. | 1. Introduction
Humans constantly interact with their environment. They
walk through a room, touch objects, rest on a chair, or sleep
in a bed. All these interactions contain information about
the scene layout and object placement. In fact, a mime is a
performer who uses our understanding of such interactions
to convey a rich, imaginary, 3D world using only their body
motion. Can we train a computer to take human motion
and, similarly, conjure the 3D scene in which it belongs?
Such a method would have many applications in synthetic
data generation, architecture, games, and virtual reality. For
example, there exist large datasets of 3D human motion
like AMASS [ 38] and such data rarely contains information
about the 3D scene in which it was captured. Could we
take AMASS and generate plausible 3D scenes for all the
motions? If so, we could use AMASS to generate training
data containing realistic human-scene interaction.
To answer such questions, we train a new method called
MIME (Mining Interaction and Movement to infer 3D Envi-
ronments) that generates plausible indoor 3D scenes based
on 3D human motion. Why is this possible? The key in-
tuitions are that (1) A human’s motion through free space
indicates the lack of objects, effectively carving out regions
of the scene that are free of furniture. And (2), when they
are in contact with the scene, this constrains both the type
and placement of 3D objects; e.g., a sitting human must be
sitting on something, such as a chair, a sofa, a bed, etc.
To make these intuitions concrete, we develop MIME,
which is a transformer-based auto-regressive 3D scene gen-
eration method that, given an empty floor plan and a human
motion sequence, predicts the furniture that is in contact
with the human. It also predicts plausible objects that have
no contact with the human but that fit with the other objects
and respect the free-space constraints induced by the human
motion. To condition the 3D scene generation with human
motion, we estimate possible contact poses using POSA [ 23]
and divide the motion in contact and non-contact snippets
(Fig. 2). The non-contact poses define free-space in the room,
which we encode as 2D floor maps, by projecting the foot
vertices onto the ground plane. The contact poses and cor-
responding 3D human body models are represented by 3D
bounding boxes of the contact vertices predicted by POSA.
We use this information as input to the transformer and auto-
regressively predict the objects that fulfill the contact and
free-space constraints; see Fig. 1.
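A minimal sketch of how non-contact poses could be turned into a 2D free-space floor map by projecting foot vertices onto the ground plane and rasterizing them into a grid; the grid resolution, room extent, and up-axis convention are assumptions for illustration, not MIME's actual data format.

```python
# Illustrative free-space map from foot vertices (assumed input format), NumPy only.
import numpy as np

def free_space_map(foot_vertices, room_min, room_max, resolution=64):
    """foot_vertices: (N, 3) world coordinates of foot vertices over a motion clip.
    Returns a (resolution, resolution) binary map: 1 = traversed (no furniture)."""
    grid = np.zeros((resolution, resolution), dtype=np.uint8)
    extent = np.asarray(room_max) - np.asarray(room_min)
    # Project onto the ground plane (drop the up axis, assumed to be y here).
    xz = foot_vertices[:, [0, 2]]
    cells = (xz - np.asarray(room_min)[[0, 2]]) / extent[[0, 2]] * resolution
    cells = np.clip(cells.astype(int), 0, resolution - 1)
    grid[cells[:, 1], cells[:, 0]] = 1
    return grid

# Toy usage: a short diagonal walk across a 4m x 4m room.
t = np.linspace(0.5, 3.5, 200)
feet = np.stack([t, np.zeros_like(t), t], axis=1)
mask = free_space_map(feet, room_min=(0, 0, 0), room_max=(4, 3, 4))
print(mask.sum(), "cells marked as free space")
```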
To train MIME, we built a new dataset called 3D-FRONT
HUMAN that extends the large-scale synthetic scene dataset
3D-FRONT [ 18]. Specifically, we automatically populate
the 3D scenes with humans; i.e., non-contact humans (a
sequence of walking motion and standing humans) as well
as contact humans (sitting, touching, and lying humans). To
this end, we leverage motion sequences from AMASS [38],
as well as static contact poses from RenderPeople [ 47] scans.
At inference time, MIME generates a plausible 3D scene
layout for the input motion, represented as 3D bounding
boxes. Based on this layout, we select 3D models from the
3D-FUTURE dataset [ 19] and refine their 3D placement
based on geometric constraints between the human poses
and the scene.
In comparison to pure 3D scene generation approaches
like ATISS [ 46], our method generates a 3D scene that sup-
ports human contact and motion while putting plausible ob-
jects in free space. In contrast to Pose2Room [ 43] which is
a recent pose-conditioned generative model, our method en-
ables the generation of objects that are not in contact with the
human, thus, predicting the entire scene instead of isolated
objects. We demonstrate that our method can directly be
applied to real captured motion sequences such as PROX-D
[22] without finetuning.
In summary, we make the following contributions:
•a novel motion-conditioned generative model for 3D
room scenes that auto-regressively generates objects
that are in contact with the human and do not occupy
free-space defined by the motion.
•a new 3D scene dataset with interacting humans and
free space humans which is constructed by populating
3D FRONT with static contact/standing poses from
RenderPeople and motion data of AMASS.
Figure 2. We divide input humans into two parts: contact humans
and free-space humans. We extract the 3D bounding boxes for
each contact human, and use non-maximum suppression on the
3D IoU to aggregate multiple humans in the same 3D space into
a single contact 3D bounding box (orange boxes). We project the
foot vertices of free-space humans on the floor plane, to get the 2D
free-space mask (dark blue).
|
Zhang_Starting_From_Non-Parametric_Networks_for_3D_Point_Cloud_Analysis_CVPR_2023 | Abstract
We present a Non-parametric Network for 3D point
cloud analysis, Point-NN , which consists of purely non-
learnable components: farthest point sampling (FPS), k-
nearest neighbors ( k-NN), and pooling operations, with
trigonometric functions. Surprisingly, it performs well on
various 3D tasks, requiring no parameters or training,
and even surpasses existing fully trained models. Start-
ing from this basic non-parametric model, we propose two
extensions. First, Point-NN can serve as a base archi-
tectural framework to construct Parametric Networks by
simply inserting linear layers on top. Given the supe-
rior non-parametric foundation, the derived Point-PN ex-
hibits a high performance-efficiency trade-off with only a
few learnable parameters. Second, Point-NN can be re-
garded as a plug-and-play module for the already trained
3D models during inference. Point-NN captures the comple-
mentary geometric knowledge and enhances existing meth-
ods for different 3D benchmarks without re-training. We
hope our work may cast a light on the community for un-
derstanding 3D point clouds with non-parametric meth-
ods. Code is available at https://github.com/
ZrrSkywalker/Point-NN .
| 1. Introduction
Point cloud 3D data processing is a foundational opera-
tion in autonomous driving [4, 12, 21], scene understand-
ing [1, 3, 33, 44], and robotics [5, 20, 26]. Point clouds
contain unordered points discretely depicting object sur-
faces in 3D space. Unlike grid-based 2D images, they
are distribution-irregular and permutation-invariant, which
leads to non-trivial challenges for algorithm designs.
Since PointNet++ [23], the prevailing trend has been
†Corresponding author
Figure 1. The Pipeline of Non-Parametric Networks. Point-NN
is constructed by the basic non-parametric components without
any learnable operators. Free from training, Point-NN can achieve
favorable performance on various 3D tasks.
adding advanced local operators and scaled-up learnable pa-
rameters. Instead of max pooling for feature aggregation,
mechanisms are proposed to extract local geometries, e.g.,
adaptive point convolutions [14, 16, 30, 37, 38] and graph-
like message passing [11, 34, 43]. The performance gain
also rises from scaling up the number of parameters, e.g.,
KPConv [30]’s 14.3M and PointMLP [17]’s 12.6M, is much
larger than PointNet++’s 1.7M. This trend has increased
network complexity and computational resources.
Instead, the non-parametric framework underlying all
the learnable modules remains nearly the same since Point-
Net++, including farthest point sampling (FPS), k-Nearest
Neighbors ( k-NN), and pooling operations. Given that few
works have investigated their efficacy, we ask the question:
can we achieve high 3D point cloud analysis performance
using only these non-parametric components?
We present a Non-parametric Network, termed Point-
NN, which is constructed by the aforementioned non-
learnable components. Point-NN, as shown in Figure 1,
consists of a non-parametric encoder for 3D feature ex-
traction and a point-memory bank for task-specific recogni-
tion. The multi-stage encoder applies FPS, k-NN, trigono-
metric functions, and pooling operations to progressively
aggregate local geometries, producing a high-dimensional
global vector for the point cloud. We only adopt simple
trigonometric functions to reveal local spatial patterns at
every pooling stage without learnable operators. Then, we
adopt the non-parametric encoder of Point-NN to extract the
training-set features and cache them as the point-memory
bank. For a test point cloud, the bank outputs task-specific
predictions via naive feature similarity matching, which val-
idates the encoder’s discrimination ability.
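The training-free matching step of the point-memory bank can be pictured with the sketch below, which caches normalized training-set features with their one-hot labels and scores a test feature by cosine similarity; the feature dimension and the exponential sharpening factor are assumptions for illustration only.

```python
# Hedged sketch of a non-parametric point-memory bank (PyTorch, no training).
import torch
import torch.nn.functional as F

class PointMemoryBank:
    def __init__(self, train_feats, train_labels, num_classes):
        # Cache L2-normalized global features of the training set and their labels.
        self.feats = F.normalize(train_feats, dim=-1)                # (N, C)
        self.onehot = F.one_hot(train_labels, num_classes).float()   # (N, K)

    def predict(self, test_feats, gamma=16.0):
        sims = F.normalize(test_feats, dim=-1) @ self.feats.t()      # cosine similarity
        weights = torch.exp(-gamma * (1.0 - sims))                   # sharpen similarities
        return weights @ self.onehot                                 # (M, K) class scores

# Toy usage with random stand-ins for encoder outputs.
bank = PointMemoryBank(torch.randn(100, 256), torch.randint(0, 10, (100,)), 10)
scores = bank.predict(torch.randn(5, 256))
print(scores.argmax(dim=-1))
```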
Free from an |
Yang_GD-MAE_Generative_Decoder_for_MAE_Pre-Training_on_LiDAR_Point_Clouds_CVPR_2023 | Abstract
Despite the tremendous progress of Masked Autoen-
coders (MAE) in developing vision tasks such as image and
video, exploring MAE in large-scale 3D point clouds re-
mains challenging due to the inherent irregularity. In con-
trast to previous 3D MAE frameworks, which either design
a complex decoder to infer masked information from main-
tained regions or adopt sophisticated masking strategies,
we instead propose a much simpler paradigm. The core
idea is to apply a Generative Decoder for MAE (GD-MAE)
to automatically merge the surrounding context to restore
the masked geometric knowledge in a hierarchical fusion
manner. In doing so, our approach is free from introducing
the heuristic design of decoders and enjoys the flexibility
of exploring various masking strategies. The correspond-
ing part costs less than 12% latency compared with con-
ventional methods, while achieving better performance. We
demonstrate the efficacy of the proposed method on several
large-scale benchmarks: Waymo, KITTI, and ONCE. Con-
sistent improvement on downstream detection tasks illus-
trates strong robustness and generalization capability. Not
only our method reveals state-of-the-art results, but remark-
ably, we achieve comparable accuracy even with 20% of the
labeled data on the Waymo dataset. Code will be released.
| 1. Introduction
We have witnessed great success in 3D object detec-
tion [44, 47, 64, 68, 71, 78], due to the numerous applica-
tions in autonomous driving, robotics, and navigation. De-
spite the impressive performance, most methods count on
large amounts of carefully labeled 3D data, which is often
∗Equal contribution. This work was done when Honghui was an intern
at Shanghai Artificial Intelligence Laboratory.
†Corresponding author
Figure 1. Comparisons. Previous MAE-style pre-training archi-
tectures of (a) single-scale [18, 19, 38] and (b) multi-scale [12, 73]
take as inputs the visible tokens and learnable tokens for decoders.
In contrast, (c) the proposed framework avoids such a process.
of high cost and time-consuming. Such a fully supervised
manner hinders the possibility of using massive unlabeled
data and can be vulnerable when applied in different scenes.
Mask Autoencoder (MAE) [18], serving as one of the ef-
fective ways for pre-training, has demonstrated great po-
tential in learning holistic representations. This is achieved
by encouraging the method to learn a semantically consis-
tent understanding of the input beyond low-level statistics.
Although MAE-based methods have shown effectiveness in
2D image [18] and video [52], how to apply it in large-scale
point clouds remains an open problem.
Due to the large variation of the visible extent of ob-
jects, learning hierarchical representation is of great signif-
icance in 3D supervised learning [40, 46, 62]. To enable
MAE-style pre-training on the hierarchical structure, previ-
ous approaches [12, 73] introduce either complex decoders
or elaborate masking strategies to learn robust latent repre-
sentations. For example, ConvMAE [12] adopts a block-
wise masking strategy that first obtains a mask for the late
stage of the encoder and then progressively upsamples the
mask to larger resolutions in early stages to maintain mask-
ing consistency. Point-M2AE [73] proposes a hierarchi-
cal decoder to gradually incorporate low-level features into
learnable tokens for reconstruction. Meanwhile, it needs a
multi-scale masking strategy that backtracks unmasked po-
sitions to all preceding scales to ensure coherent visible re-
gions and avoid information leakage. The minimum size of
masking granularity is highly correlated to output tokens of
the last stage, which inevitably poses new challenges, espe-
cially to objects with small sizes, e.g., pedestrians.
To alleviate the issue, we present a much simpler
paradigm dubbed GD-MAE for pre-training, as shown in
Figure 1. The key is to use a generative decoder to automat-
ically expand the visible regions to the underlying masked
area. In doing so, it eliminates the need for designing
complex decoders, in which masked regions are presented
as learnable tokens. It also allows for the unification of
multi-scale features into the same scale, thus enabling flex-
ible masking strategies, e.g., point- and patch-wise mask-
ing, while avoiding intricate operations such as backtrack-
ing in [12, 73] to keep masking consistency. Specifically, it
consists of the following components:
Firstly, we propose the Sparse Pyramid Transformer
(SPT) as the multi-scale encoder. Following [9,22,43], SPT
takes pillars as input due to the compact and regular repre-
sentation. Unlike PointPillars [22] that uses traditional con-
volutions for feature extraction, we use the sparse convo-
lution [62] to downsample the tokens and the sparse trans-
former [9] to enlarge the receptive field of the visible tokens
when deploying extensive masking.
Secondly, we introduce the Generative Decoder (GD) to
simplify MAE-style pre-training on multi-scale backbones.
GD consists of a series of transposed convolutions used to
upsample multi-scale features and a convolution utilized to
expand the visible area, as shown in Figure 2. The expanded
features are then directly indexed according to the coordi-
nates of the masked tokens for the geometric reconstruction.
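A rough, dense stand-in for the generative decoder is sketched below: transposed convolutions bring the multi-scale features back to a common resolution and a plain convolution spreads visible features into neighbouring masked cells; the channel sizes and the dense 2D layout are simplifying assumptions (the actual model operates on sparse pillar features).

```python
# Illustrative dense stand-in for the generative decoder (PyTorch).
import torch
import torch.nn as nn

class GenerativeDecoder(nn.Module):
    def __init__(self, chans=(64, 128, 256), out_ch=64):
        super().__init__()
        # One transposed conv per scale to upsample everything back to the finest scale.
        self.up = nn.ModuleList([
            nn.ConvTranspose2d(c, out_ch, kernel_size=2 ** i, stride=2 ** i)
            for i, c in enumerate(chans)
        ])
        # A plain conv expands visible features into surrounding (masked) cells.
        self.expand = nn.Conv2d(out_ch * len(chans), out_ch, kernel_size=3, padding=1)

    def forward(self, feats):
        # feats: BEV maps [(B, 64, H, W), (B, 128, H/2, W/2), (B, 256, H/4, W/4)]
        fused = torch.cat([up(f) for up, f in zip(self.up, feats)], dim=1)
        return self.expand(fused)  # (B, out_ch, H, W); masked cells are then indexed for reconstruction

# Toy usage.
dec = GenerativeDecoder()
feats = [torch.randn(2, 64, 32, 32), torch.randn(2, 128, 16, 16), torch.randn(2, 256, 8, 8)]
print(dec(feats).shape)
```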
Extensive experiments have been conducted on Waymo
Open Dataset [49], KITTI [13], and ONCE [33] to ver-
ify the efficacy. On the Waymo dataset, GD-MAE sets
new state-of-the-art detection results compared to previ-
ously published methods.
Our contributions are summarized as follows:
• We introduce a simpler MAE framework that avoids
complex decoders and thus simplifies pre-training.
Figure 2. Illustration of area expansion. The input point cloud
(i.e., the orange curve) is voxelized and fed into the multi-scale
encoder. The generative decoder can automatically expand visible
features to potentially masked areas.
• The proposed decoder enables flexible masking strate-
gies on LiDAR point clouds, while costing less than
12% latency compared with conventional methods.
• Extensive experiments are conducted to verify the ef-
fectiveness of the proposed model.
|
Zhang_Layout-Based_Causal_Inference_for_Object_Navigation_CVPR_2023 | Abstract
Previous works for ObjectNav task attempt to learn the
association (e.g. relation graph) between the visual inputs
and the goal during training. Such association contains
the prior knowledge of navigating in training environments,
which is denoted as the experience. The experience ex-
erts a positive effect on helping the agent infer the likely
location of the goal when the layout gap between the un-
seen environments of the test and the prior knowledge ob-
tained in training is minor. However, when the layout gap is
significant, the experience exerts a negative effect on nav-
igation. Motivated by keeping the positive effect and re-
moving the negative effect of the experience, we propose
the layout-based soft Total Direct Effect (L-sTDE) frame-
work based on the causal inference to adjust the predic-
tion of the navigation policy. In particular, we propose to
calculate the layout gap which is defined as the KL diver-
gence between the posterior and the prior distribution of
the object layout. Then the sTDE is proposed to appropri-
ately control the effect of the experience based on the lay-
out gap. Experimental results on AI2THOR, RoboTHOR,
and Habitat demonstrate the effectiveness of our method.
The code is available at https://github.com/sx-
zhang/Layout-based-sTDE.git .
| 1. Introduction
The visual object-oriented navigation task (i.e. Object-
Nav) [3] requires the agent to navigate to a user-specified
goal (e.g. laptop) based on the egocentric visual observa-
tions. A typical challenge is navigating in unseen environ-
ments, where the goal is invisible most of the time, i.e. the
partial observable problem, which typically results in the
agent’s meaningless actions (e.g. back-tracking or getting
lost at dead-ends). Although encouraging the exploration in
the unseen environment (until the goal is visible) is an in-
tuitive solution, the lack of environment layout information
still limits the efficiency of goal-oriented navigation.
Figure 1. The proposed causal graph (S: observation, G: goal, A: action, Z: experience). (a) represents the fact pre-
diction a, i.e. the original prediction of the trained model. (b)
refers to the counter-fact prediction ¯a, i.e. the prediction is only
affected by the experience Z. (b) is realized by applying the inter-
vention and counterfactual operations to the original model.
Recently, the learning-based methods attempt to model
the prior knowledge of the spatial relationships among the
objects, so the agent could infer the likely locations of the
goal based on the current observation (which objects are
observed currently) and the prior knowledge (the spatial
relationships between the goal and the observed objects)
learned in the training stage. Some works utilize additional
modules to construct the objects graph [15, 59, 60], the re-
gion graph [63] and the attention mechanism [32], while
others [16, 56] employ a network that implicitly learns the
spatial relationships end-to-end. All these methods attempt
to establish prior knowledge in training environments, so
that the agent would utilize the prior knowledge to associate
the real-time observations with the goal, and infer the likely
locations of the goal during the inference. The underlying
assumption of these methods is that all of the object lay-
outs in unseen environments should be exactly consistent
with those in training environments. However, the layout
consistency assumption is only partially correct due to the
limited training data. Thus, those methods typically suffer
from poor generalization [31] in unseen environments.
To reveal the causes of poor generalization, we propose
to use the causal graph (i.e. Structural Causal Model, SCM
[38]) to analyze these navigation works. As illustrated in
Fig. 1 (a), the navigation model takes the observation Sand
the goal Gas the input, and predicts the action Aat each
timestamp. The causal links S→ZandG→Zrepresent
that the observation and the goal are embedded by the pre-
constructed modules [15, 32, 59, 60, 63] or the pre-trained
network [16, 56]. The embedding vector is defined as the
experience Zin the causal graph, which introduces the prior
knowledge to influence action prediction ( Z→A). Mean-
while, the real-time observation and the goal also indepen-
dently affect the prediction without being encoded by the
prior knowledge module, which is represented as S→A
andG→A, respectively. The causal links S→Aand
G→Arepresent the exploration-based effect (only related
to the current episode) on the action prediction, which is dif-
ferent from the experience-based effect Z→A. Consider
two cases of the layout gap between the current environment
and the prior knowledge: 1) the layout gap is minor and 2)
the layout gap is significant. In the former case, the object
layout is consistent in the current environment and the prior
knowledge. Thus, the experience Zexerts a positive effect
on the prediction of action A. However, the effect of experi-
enceZin the latter case could be negative. If the agent still
relies on the “negative” experience to predict actions, it will
suffer from poor generalization. Therefore, wisely utilizing
the experience is essential to the ObjectNav task.
Motivated by wisely utilizing the learned experience,
we propose the soft Total Direct Effect (sTDE) framework
based on the Total Direct Effect analysis in causal inference.
Our sTDE improves the generalization of the trained model
in inference by eliminating the prediction bias brought by
the experience. To decouple the effect of experience, we
construct the counter-fact prediction ¯a: the prediction is
only affected by the experience Zwhile ignoring the Sand
G, as shown in Fig. 1 (b). Then we propose the object
layout estimator that calculates whether the effect of the ex-
perience is positive, by measuring the layout gap between
the current environment and the prior knowledge. Further-
more, our sTDE will remove the counter-fact prediction ¯a
from the fact prediction awhen the layout gap is large.
In this paper, we propose the layout-based soft TDE
framework for the ObjectNav task. Specifically, we adopt
the Dirichlet-Multinomial distribution [22] to formulate the
contextual relationship between objects, which represents
the object layout of the environment. Before training, the
agent learns prior layout distribution (i.e. the prior parame-
ters of Dirichlet-Multinomial distribution) by randomly ex-
ploring the training environments. In the training stage,
based on the Bayesian inference, the agent estimates the
posterior layout distribution with the prior distribution and
newly obtained observations. Then the constantly updated
posterior layout is encoded into the navigation model and
utilized to learn the environment-adaptive experience. The entire model is trained with RL by maximizing the reward
of reaching the goal. In the test stage, our agent will not di-
rectly use the trained policy as most previous works do. The
agent first calculates the layout gap and the counter-fact pre-
diction. The layout gap is determined by calculating the KL
divergence between the posterior and prior distribution of
object layouts and serves as a weight to determine whether
to remove the counter-fact prediction (i.e. experience ef-
fect) from the original prediction. The experimental results
on AI2THOR [27], RoboTHOR [12] and Habitat [48] in-
dicate that our layout-based sTDE (L-sTDE) can be a plug-
and-play method to boost existing methods to achieve better
navigation performances.
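To sketch how the layout gap could gate the counter-fact prediction, the snippet below updates a Dirichlet prior with observed layout counts, measures the KL divergence to the prior, and subtracts a correspondingly weighted counter-fact term from the factual logits; the exponential gating form and the toy dimensions are assumptions, not the paper's exact formulation.

```python
# Simplified sketch of layout-based soft TDE gating (NumPy + SciPy).
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_kl(alpha_q, alpha_p):
    """KL( Dir(alpha_q) || Dir(alpha_p) )."""
    sq, sp = alpha_q.sum(), alpha_p.sum()
    return (gammaln(sq) - gammaln(sp)
            - np.sum(gammaln(alpha_q) - gammaln(alpha_p))
            + np.sum((alpha_q - alpha_p) * (digamma(alpha_q) - digamma(sq))))

def soft_tde(fact_logits, counterfact_logits, prior_alpha, observed_counts, tau=1.0):
    # Bayesian update: posterior Dirichlet parameters = prior + observed layout counts.
    post_alpha = prior_alpha + observed_counts
    layout_gap = dirichlet_kl(post_alpha, prior_alpha)
    # Larger gap -> trust the experience less -> remove more of the counter-fact prediction.
    w = 1.0 - np.exp(-layout_gap / tau)
    return fact_logits - w * counterfact_logits

# Toy usage over 4 object-relation bins and 6 actions.
prior = np.array([2.0, 2.0, 2.0, 2.0])
counts = np.array([10.0, 0.0, 0.0, 1.0])     # observations skewed away from the prior
adjusted = soft_tde(np.random.randn(6), np.random.randn(6), prior, counts)
print(adjusted)
```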
|
Ye_Partial_Network_Cloning_CVPR_2023 | Abstract
In this paper, we study a novel task that enables par-
tial knowledge transfer from pre-trained models, which we
term as Partial Network Cloning (PNC). Unlike prior meth-
ods that update all or at least part of the parameters in the
target network throughout the knowledge transfer process,
PNC conducts partial parametric “cloning” from a source
network and then injects the cloned module to the target,
without modifying its parameters. Thanks to the transferred
module, the target network is expected to gain additional
functionality, such as inference on new classes; whenever
needed, the cloned module can be readily removed from the
target, with its original parameters and competence kept
intact. Specifically, we introduce an innovative learning
scheme that allows us to identify simultaneously the com-
ponent to be cloned from the source and the position to be
inserted within the target network, so as to ensure the opti-
mal performance. Experimental results on several datasets
demonstrate that, our method yields a significant improve-
ment of 5% in accuracy and 50% in locality when com-
pared with parameter-tuning based methods. Our code is
available at https://github.com/JngwenYe/PNCloning.
| 1. Introduction
With the recent advances in deep learning, an increas-
ingly number of pre-trained models have been released
online, demonstrating favourable performances on various
computer vision applications. As such, many model-reuse
approaches have been proposed to take advantage of the
pre-trained models. In practical scenarios, users may re-
quest to aggregate partial functionalities from multiple pre-
trained networks, and customize a target network whose
competence differs from any network in the model zoo.
A straightforward solution to the functionality dynamic
changing is to re-train the target network using the origi-
nal training dataset, or to conduct finetuning together with
regularization strategies to alleviate catastrophic forget-
†Corresponding author.
Figure 1. Illustration of partial network cloning. Given a set of
pre-trained source models, we “clone” the transferable modules
from the source, and insert them into the target model (left) while
preserving the functionality (right).
ting [3,19,39], which is known as continual learning. How-
ever, direct re-training is extremely inefficient, let alone the
fact that original training dataset is often unavailable. Con-
tinual learning, on the other hand, is prone to catastrophic
forgetting especially when the amount of data for finetun-
ing is small, which, unfortunately, often occurs in practice.
Moreover, both strategies inevitably overwrite the original
parameters of the target network, indicating that, without
explicitly storing original parameters of the target network,
there is no way to recover its original performance or com-
petence when this becomes necessary.
In this paper, we investigate a novel task, termed as
Partial Network Cloning (PNC), to migrate knowledge
from the source network, in the form of a transferable mod-
ule, to the target one. Unlike prior methods that rely on
updating parameters of the target network, PNC attempts to
clone partial parameters from the source network and then
directly inject the cloned module into the target, as shown
in Fig. 1. In other words, the cloned module is transferred
to the target in a copy-and-paste manner. Meanwhile, the
original parameters of the target network remain intact, in-
dicating that whenever necessary, the newly added module
can be readily removed to fully recover its original function-
ality. Notably, the cloned module per se is a fraction of the
source network, and therefore requires no additional
storage except for the lightweight adapters. Such flexibil-
ity to expand the network functionality and to detach the
cloned module without altering the base of the target or al-
locating extra storage, in turn, greatly enhances the utility
of pre-trained model zoo and largely enables plug-and-play
model reassembly.
Admittedly, the ambitious goal of PNC comes with sig-
nificant challenges, mainly attributed to the black-box na-
ture of the neural networks, alongside our intention to pre-
serve the performances on both the previous and newly-
added tasks of the target. The first challenge concerns the
localization of the to-be-cloned module within the source
network, since we seek discriminant representations and
good transferability to the downstream target task. The sec-
ond challenge, on the other hand, lies in how to inject the
cloned module to ensure the performance.
To solve these challenges, we introduce an innovative
strategy for PNC, through learning the localization and in-
sertion in an intertwined manner between the source and tar-
get network. Specifically, to localize the transferable mod-
ule in the source network, we adopt a local-performance-
based pruning scheme for parameter selection. To adap-
tively insert the module into the target network, we utilize a
positional search method in the aim to achieve the optimal
performance, which, in turn, optimizes the localization op-
eration. The proposed PNC scheme achieves performances
significantly superior to those of the continual learning set-
ting ( 5%∼10%), while reducing data dependency to 30%.
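One way to picture the copy-and-paste idea is the hedged sketch below: a cloned source block and a lightweight adapter are attached to a frozen target classifier and can be detached to recover the target exactly; the wrapper design, class split, and stand-in networks are illustrative assumptions, not the paper's actual localization and insertion procedure.

```python
# Hedged sketch of copy-and-paste module injection (PyTorch); the wrapper design,
# adapter shape, and class split are illustrative assumptions, not the paper's method.
import copy
import torch
import torch.nn as nn

class ClonedTarget(nn.Module):
    def __init__(self, target, cloned_block, adapter_dim, new_classes):
        super().__init__()
        self.target = target                       # frozen: its parameters are never updated
        for p in self.target.parameters():
            p.requires_grad_(False)
        self.cloned = copy.deepcopy(cloned_block)  # copied verbatim from the source model
        self.adapter = nn.Linear(adapter_dim, new_classes)  # only new, lightweight parameters

    def forward(self, x):
        old_logits = self.target(x)
        new_logits = self.adapter(self.cloned(x).flatten(1))
        return torch.cat([old_logits, new_logits], dim=1)

    def detach_clone(self):
        # Removing the clone fully recovers the original target network.
        return self.target

# Toy usage with stand-in networks.
target = nn.Sequential(nn.Flatten(), nn.Linear(32, 5))        # original 5-way classifier
source_block = nn.Sequential(nn.Flatten(), nn.Linear(32, 16)) # "cloned" source fragment
model = ClonedTarget(target, source_block, adapter_dim=16, new_classes=3)
print(model(torch.randn(2, 32)).shape)   # (2, 8): 5 old + 3 new classes
```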
Our contributions are therefore summarized as follows.
• We introduce a novel yet practical model re-use setup,
termed as partial network cloning (PNC). In contrast
to conventional settings the rely on updating all or part
of the parameters in the target network, PNC migrates
parameters from the source in a copy-and-paste man-
ner to the target, while preserving original parameters
of the target unchanged.
• We propose an effective scheme towards solving PNC,
which conducts learnable localization and insertion of
the transferable module jointly between the source and
target network. The two operations reinforce each
other and together ensure the performance of the tar-
get network.
• We conduct experiments on four widely-used datasets
and showcase that the proposed method consis-
tently achieves results superior to the conventional
knowledge-transfer settings, including continual learn-
ing and model ensemble. |
Yu_TOPLight_Lightweight_Neural_Networks_With_Task-Oriented_Pretraining_for_Visible-Infrared_Recognition_CVPR_2023 | Abstract
Visible-infrared recognition (VI recognition) is a chal-
lenging task due to the enormous visual difference across
heterogeneous images. Most existing works achieve promis-
ing results by transfer learning, such as pretraining on
the ImageNet, based on advanced neural architectures like
ResNet and ViT. However, such methods ignore the neg-
ative influence of the pretrained colour prior knowledge,
as well as their heavy computational burden makes them
hard to deploy in actual scenarios with limited resources.
In this paper, we propose a novel task-oriented pretrained
lightweight neural network (TOPLight) for VI recognition.
Specifically, the TOPLight method simulates the domain
conflict and sample variations with the proposed fake do-
main loss in the pretraining stage, which guides the network
to learn how to handle those difficulties, such that a more
general modality-shared feature representation is learned
for the heterogeneous images. Moreover, an effective fine-
grained dependency reconstruction module (FDR) is devel-
oped to discover substantial pattern dependencies shared
in two modalities. Extensive experiments on VI person re-
identification and VI face recognition datasets demonstrate
the superiority of the proposed TOPLight, which signifi-
cantly outperforms the current state of the arts while de-
manding fewer computational resources.
| 1. Introduction
Identity recognition technologies have provided numer-
ous reliable solutions for monitoring systems, which strive
to match the face (face recognition [6, 7]) or pedestrian
(person re-identification [42]) images of the same identity.
However, the majority of previous efforts only consider vis-
ible images. In real-life practice, many surveillance cam-
eras can switch to infrared imaging mode at night. Thus,
the essential cross-modality visible-infrared recognition (VI
*Corresponding Author (Email: [email protected])
Figure 1. (a) The task-oriented pretraining strategy; (b) Per-
formance comparison of the standard ImageNet-1k pretraining
scheme (SP) and the proposed task-oriented pretraining scheme
(TOP) on the SYSU-MM01 dataset [35] (All-Search mode).
recognition) technology has been developed to match the
visible and infrared photographs of the same people.
Recently, visible-infrared person re-identification (VI-
ReID) [26, 38] and visible-infrared face recognition [8, 13,
44] have been widely studied. The key issue is identi-
fying the modality-shared patterns. To this end, several
works [29, 30] use generative adversarial networks (GANs)
to implement cross-modality alignment at the pixel and fea-
ture levels. Others [4, 26, 38] design the dual-path feature
extraction network, coupled with inter-feature constraints,
to close the embedding space of two modalities. However,
these methods utilize at least one pretrained ResNet-50 [12]
backbone to extract solid features, which makes them un-
suitable for edge monitoring devices. Recent works [4, 38]
employ auxiliary models (e.g., pose estimation, graph rea-
soning) to relieve the modality discrepancy, which enhances
the performance on academic benchmarks but reduces the
real-time inference speed. Compared with conventional
deep networks (e.g., ResNet, ViT), lightweight networks
[11,15,24] can extract basal features rapidly. In VI recogni-
tion tasks, however, the vast modality discrepancy renders
the performance of lightweight networks significantly infe-
rior to that of conventional deep networks. The main rea-
son is that lightweight networks lack the ability to identify
modality-shared patterns from heterogeneous images.
To address this issue, we present an effective task-
oriented pretraining (TOP) strategy. As shown in Fig. 1(a),
we first train a lightweight network on the ImageNet-1k
dataset to learn vision prior knowledge. After that, the
trained network is transformed into the dual-path network
and further trained by using task-oriented data augmenta-
tions, identity consistency loss and fake domain loss on
the ImageNet-mini dataset [18]. The task-oriented pretrain-
ing (TOP) strategy simulates the sample differences in VI
scenes and teaches the network how to represent and em-
bed discrepant features. Fig. 1(b) reports the performance
of three lightweight networks in the VI-ReID task. Com-
pared with the ImageNet-1k pretraining, our TOP strategy
can remarkably improve the baseline performance.
Another weakness of lightweight networks is that few
feature maps are learned from raw images for rapid infer-
ence. In the VI recognition scene, it is challenging to dis-
cover modality-shared patterns with so few learned feature
maps. In practice, the network can focus on a group of ag-
gregated modality-specific patterns that offer the most gra-
dient for identity classification. In contrast, the fine-grained
and modality-shared patterns, which are crucial for achiev-
ing robust cross-modality matching, are neglected.
Based on the above observations, we present a novel
fine-grained dependency reconstruction (FDR) module to
help lightweight networks learn modality-shared and fine-
grained patterns. Specifically, inspired by the horizontal
slice scheme [1], we first slice feature maps horizontally
and vertically to extract fine-grained patterns from diver-
sified local regions. Then, the original spatial dependen-
cies of these patterns are eliminated by using pooling oper-
ations. Further, the cross-modality dependencies are built
by using up-sampling layers to amplify the modality-shared
parts from these patterns. At last, to avoid overfitting, the
shuffle attention is designed to re-weight the channel depen-
dencies of all the feature maps, which spreads attention to
local patterns as much as possible. In general, the major
contributions of this paper can be summarized as follows.
• We propose an effective task-oriented pretrained
lightweight neural network (TOPLight) for VI recog-
nition. To the best of our knowledge, it is the first work
to develop a paradigm for VI recognition on edge de-
vices with an extremely low computation budget.
• An effective task-oriented pretraining strategy is pro-
posed to enhance the heterogeneous feature learning
capacity of lightweight networks with task-oriented
augmentations and the proposed fake domain loss.• A fine-grained dependency reconstruction module is
designed to mine cross-modality dependencies.
• Extensive experiments demonstrate that the proposed
method outperforms the state-of-the-art methods on
mainstream VI-ReID and VI face recognition datasets
by a remarkable margin and extremely low complexity.
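As a rough sketch of the fine-grained dependency reconstruction module described above, the snippet below slices a feature map into horizontal and vertical strips, pools away their original spatial layout, upsamples the strip descriptors back onto the map, and re-weights shuffled channel groups; the strip count, group number, and attention form are assumptions for illustration only.

```python
# Hedged sketch of fine-grained dependency reconstruction (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FDR(nn.Module):
    def __init__(self, channels, strips=4, groups=4):
        super().__init__()
        self.strips, self.groups = strips, groups
        # Lightweight channel re-weighting applied after shuffling channel groups.
        self.attn = nn.Sequential(nn.Linear(channels, channels // 4), nn.ReLU(),
                                  nn.Linear(channels // 4, channels), nn.Sigmoid())

    def forward(self, x):                                      # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Slice horizontally and vertically, then pool away the original spatial layout.
        h_parts = F.adaptive_avg_pool2d(x, (self.strips, 1))   # (B, C, strips, 1)
        v_parts = F.adaptive_avg_pool2d(x, (1, self.strips))   # (B, C, 1, strips)
        # Rebuild dependencies by upsampling the strip descriptors onto the full map.
        h_up = F.interpolate(h_parts, size=(H, W), mode='nearest')
        v_up = F.interpolate(v_parts, size=(H, W), mode='nearest')
        fused = x + h_up + v_up
        # Shuffle channels into groups, then re-weight channel dependencies.
        shuffled = fused.view(B, self.groups, C // self.groups, H, W)
        shuffled = shuffled.transpose(1, 2).reshape(B, C, H, W)
        w = self.attn(shuffled.mean(dim=(2, 3)))               # (B, C)
        return shuffled * w[:, :, None, None]

# Toy usage.
print(FDR(64)(torch.randn(2, 64, 16, 16)).shape)
```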
|
Yu_Adaptive_Spot-Guided_Transformer_for_Consistent_Local_Feature_Matching_CVPR_2023 | Abstract
Local feature matching aims at finding correspondences
between a pair of images. Although current detector-free
methods leverage Transformer architecture to obtain an im-
pressive performance, few works consider maintaining lo-
cal consistency. Meanwhile, most methods struggle with
large scale variations. To deal with the above issues, we
propose Adaptive Spot-Guided Transformer (ASTR) for lo-
cal feature matching, which jointly models the local consis-
tency and scale variations in a unified coarse-to-fine archi-
tecture. The proposed ASTR enjoys several merits. First,
we design a spot-guided aggregation module to avoid in-
terfering with irrelevant areas during feature aggregation.
Second, we design an adaptive scaling module to adjust the
size of grids according to the calculated depth information
at fine stage. Extensive experimental results on five stan-
dard benchmarks demonstrate that our ASTR performs fa-
vorably against state-of-the-art methods. Our code will be
released on https://astr2023.github.io .
*Equal Contribution
†Corresponding Author | 1. Introduction
Local feature matching (LFM) is a fundamental task in
computer vision, which aims to establish correspondence
for local features across image pairs. As a basis for many
3D vision tasks, local feature matching can be applied in
Structure-from-Motion (SfM) [49], 3D reconstruction [13],
visual localization [48, 51], and pose estimation [18, 41].
Because of its broad applications, local feature matching
has attracted substantial attention and facilitated the devel-
opment of many researches [14, 27, 42, 44, 50]. However,
finding consistent and accurate matches is still difficult due
to various challenging factors such as illumination varia-
tions, scale changes, poor textures, and repetitive patterns.
To deal with the above challenges, numerous matching
methods have been proposed, which can be generally cat-
egorized into two major groups, including detector-based
matching methods [2, 14, 15, 39, 42, 47] and detector-free
matching methods [9, 23, 27, 43, 44, 50]. Detector-based
matching methods need to first design a keypoint de-
tector to extract the keypoints between two images, and
then establish matches between these extracted keypoints.
The quality of detected keypoints will significantly af-
fect the performance of detector-based matching methods.
Therefore, many works aim to improve keypoint detection
through multi-scale detection [36], repeatable and reliable
verification [42]. Thanks to the high-quality keypoints de-
tected, these methods can achieve satisfactory performance
while maintaining high computational and memory effi-
ciency. However, these detector-based matching methods
may have difficulty in finding reliable matches in textureless
areas, where keypoints are challenging to detect. Differ-
ently, detector-free matching methods do not need to detect
keypoints and try to establish pixel-level matches between
local features. In this way, it is possible to establish matches
in the texture-less areas. Due to the power of attention in
capturing long-distance dependencies, many Transformer-
based methods [9, 50, 52, 57] have emerged in recent years.
As a representative work, considering the computation and
memory costs, LoFTR [50] applies Linear Transformer [25]
to aggregate global features at the coarse stage and then
crops fixed-size grids for further refinement. To alleviate
the problem caused by scale changes, COTR [24] calculates
the co-visible area iteratively through attention mechanism.
The promising performance of Transformer-based methods
proves that attention mechanism is effective on local feature
matching. Nevertheless, some recent works [28, 60] indi-
cate Transformer lacks spatial inductive bias for continuous
dense prediction tasks, which may cause inconsistent local
matching results.
By studying the previous matching methods, we sum up
two issues that are imperative for obtaining the dense cor-
respondence between images. (1) How to maintain local
consistency. The correct matching result usually satisfies
the local matching consistency, i.e., for two similar adja-
cent pixels, their matching points are also extremely close
to each other. Existing methods [24,50,57] utilize global at-
tention in feature aggregation, introducing many irrelevant
regions that affect feature updates. Some pixels are dis-
turbed by noisy or similar areas and aggregate information
from wrong regions, leading to false matching results. As
shown in Figure 1 (b), for two adjacent similar pixels, high-
lighted regions of global linear attention are decentralized
and inconsistent with each other. The inconsistency is also
present in vanilla attention (see Figure 1 (c)). Therefore, it is
necessary to utilize local consistency to focus the attention
area on the correct place. (2) How to handle scale vari-
ation. In a coarse-to-fine architecture, since the attention
mechanism at the coarse stage is not sensitive to scale vari-
ations, we should focus on the fine stage. Previous meth-
ods [9, 27, 50, 57] select fixed-size grids for matching at the
fine stage. However, when the scale varies too much across
images, the correct match point may be out of the range of
the grid, resulting in matching failure. Hence, the scheme
of cropping grids should be adaptively adjusted according
to scale variation across views.
To deal with the above issues, we propose a novel Adap-
tive Spot-guided Transformer (ASTR) for consistent local
feature matching, including a spot-guided aggregation module and an adaptive scaling module. In the spot-guided ag-
gregation module, towards the goal of maintaining local
consistency, we design a novel attention mechanism called
spot-guided attention: each point is guided by similar high-
confidence points around it, focusing on a local candidate
region at each layer. Here, we also adopt global features
to enhance the matching ability of the network in the can-
didate regions. Specifically, for any point p, we pick the
points with high feature similarity and matching confidence
in the local area. Their corresponding matching regions are
used for the next attention of point p. In addition, global
features are applied to help the network to make judgments.
The coarse feature maps are iteratively updated in the above
way. With our spot-guided aggregation module, the red and
green pixels are guided to the correct area, avoiding the in-
terference of repetitive patterns (see Figure 1 (d)). In Fig-
ure 1 (e), our ASTR produces more accurate matching re-
sults, which maintains local matching consistency. In the
adaptive scaling module, to fully account for possible scale
variations, we attempt to adaptively crop different sizes of
grids for alignment. In detail, we compute the correspond-
ing depth map using the coarse matching result and leverage
the depth information to crop adaptive size grids from the
high-resolution feature maps for fine matching.
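A hedged sketch of the spot-guided candidate selection is given below: for every coarse pixel, nearby points are scored by feature similarity weighted by matching confidence, and the matched locations of the top-scoring neighbours become the candidate region for the next attention layer; the neighbourhood radius, top-k value, and tensor layout are assumptions for illustration.

```python
# Hedged sketch of spot-guided candidate selection (PyTorch); layouts are assumed.
import torch
import torch.nn.functional as F

def spot_guided_candidates(feat, conf, match_xy, radius=2, topk=3):
    """feat: (H, W, C) coarse features of image A; conf: (H, W) matching confidence;
    match_xy: (H, W, 2) current matched coordinates in image B for every A-pixel.
    Returns (H, W, topk, 2): candidate B-locations guiding the next attention layer."""
    H, W, C = feat.shape
    f = F.normalize(feat, dim=-1)
    out = torch.zeros(H, W, topk, 2)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            nb_f = f[i0:i1, j0:j1].reshape(-1, C)
            nb_c = conf[i0:i1, j0:j1].reshape(-1)
            nb_xy = match_xy[i0:i1, j0:j1].reshape(-1, 2)
            # Score neighbours by feature similarity weighted by their confidence.
            score = (nb_f @ f[i, j]) * nb_c
            idx = score.topk(min(topk, score.numel())).indices
            out[i, j, :idx.numel()] = nb_xy[idx]
    return out

# Toy usage on an 8x8 coarse grid.
cand = spot_guided_candidates(torch.randn(8, 8, 32), torch.rand(8, 8),
                              torch.randint(0, 8, (8, 8, 2)).float())
print(cand.shape)
```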
The contributions of our method could be summarized
into three-fold: (1) We propose a novel Adaptive Spot-
guided Transformer (ASTR) for local feature matching, in-
cluding a spot-guided aggregation module and an adap-
tive scaling module. (2) We design a spot-guided aggre-
gation module that can maintain local consistency and be
unaffected by irrelevant regions while aggregating features.
Our adaptive scaling module is able to leverage depth in-
formation to adaptively crop different size grids for refine-
ment. (3) Extensive experimental results on five challeng-
ing benchmarks show that our proposed method performs
favorably against state-of-the-art image matching methods.
|
Zeng_Learning_Transferable_Spatiotemporal_Representations_From_Natural_Script_Knowledge_CVPR_2023 | Abstract
Pre-training on large-scale video data has become a
common recipe for learning transferable spatiotemporal
representations in recent years. Despite some progress, ex-
isting methods are mostly limited to highly curated datasets
(e.g., K400) and exhibit unsatisfactory out-of-the-box rep-
resentations. We argue that it is due to the fact that they
only capture pixel-level knowledge rather than spatiotem-
poral semantics, which hinders further progress in video
understanding. Inspired by the great success of image-
text pre-training ( e.g., CLIP), we take the first step to ex-
ploit language semantics to boost transferable spatiotem-
poral representation learning. We introduce a new pre-
text task, Turning to Video for Transcript Sorting (TVTS),
which sorts shuffled ASR scripts by attending to learned
video representations. We do not rely on descriptive cap-
tions and learn purely from video, i.e., leveraging the natu-
ral transcribed speech knowledge to provide noisy but use-
ful semantics over time. Our method enforces the vision
model to contextualize what is happening over time so that
it can re-organize the narrative transcripts, and can seam-
lessly apply to large-scale uncurated video data in the real
world. Our method demonstrates strong out-of-the-box spa-
tiotemporal representations on diverse benchmarks, e.g.,
+13.6% gains over VideoMAE on SSV2 via linear prob-
ing. The code is available at https://github.com/
TencentARC/TVTS .
| 1. Introduction
The aspiration of representation learning is to encode
general-purpose representations that transfer well to di-
verse downstream tasks, where self-supervised methodolo-
gies [9, 25] dominate due to their advantage in exploiting large-scale unlabeled data. Despite significant progress in
learning representations of still images [23, 43], the real
world is dynamic and requires reasoning over time. In this
paper, we focus on out-of-the-box spatiotemporal represen-
tation learning , a more challenging but practical task to-
wards generic video understanding, which aims to capture
hidden representations that can be further used to conduct
reasoning on broader tasks, e.g., classification and retrieval.
There have been various attempts at self-supervised pre-
training on video data from discriminative learning ob-
jectives [5, 8, 27] to generative ones [17, 49], where the
core is context capturing in spatial and temporal dimen-
sions. Though promising results are achieved when trans-
ferring the pre-trained models to downstream video recog-
nition [22, 33, 48] via fine-tuning, the learned representa-
tions are still far away from out-of-the-box given the poor
linearly probing results (see Figure 1(a)). Moreover, exist-
ing works mostly develop video models on the highly cu-
rated dataset with particular biases, i.e., K400 [31]. Their
applicability in the real world is questioned given the ob-
served performance drops when training on a larger but
uncurated dataset, YT-Temporal [55]. We argue that, to
address the above issue, the rich spatiotemporal semantics
contained in the video itself should be fully exploited. But
current video models generally exploit visual-only percep-
tion ( e.g., pixels) without explicit semantics.
Recently, the success of CLIP [43] has inspired the com-
munity to learn semantically aware image representations
that are better transferable to downstream tasks and scal-
able to larger uncurated datasets. It provides a feasible so-
lution for improving spatiotemporal representation learning
but remains two key problems. (1) The vision-language
contrastive constraints in CLIP mainly encourage the un-
derstanding of static objects (noun contrast) and simple mo-
tions (verb contrast), while how to enable long-range tem-
Figure 1. (a) We evaluate the transferability of spatiotemporal representations via linear probing on four video recognition datasets [22, 31, 33, 48], where the state-of-the-art method [49] underperforms. It performs even worse when pre-trained with a large-scale uncurated dataset, YT-Temporal [55]. (b) We encourage complex temporal understanding and advanced spatiotemporal representation learning with a new pretext task of sorting transcripts.
poral understanding with language supervision needs to be
studied. (2) The quality of language supervision [47] is crit-
ical to the final performance of CLIP, however, it is hard to
collect large-scale video data with literal captions that care-
fully describe the dynamic content over time. The ideal way
for self-supervised learning is to learn useful knowledge
purely from the data itself, which is also the philosophy
followed by previous video pre-training methods [17, 49].
Fortunately, video data is naturally multi-modal with tran-
scribed speech knowledge in the form of text (ASR), pro-
viding time-dependent semantics despite some noise.
To facilitate spatiotemporal understanding in large-scale
uncurated data under the supervision of inherent script
knowledge, we introduce a new pretext task for video pre-
training, namely, Turning to Video for Transcript Sorting
(TVTS). Intuitively, people sort out the order of events by
temporal reasoning. As illustrated in Figure 1(b), given
several unordered transcripts, it is difficult to reorganize
the narrative by merely understanding the literal semantics.
When the corresponding video is provided, it will be much
easier to sort the transcripts by contextualizing what is hap-
pening over time. Whereas in neural networks, the tem-
poral inference is embedded in spatiotemporal representa-
tions. Thus we believe that if the chronological order of
transcripts can be correctly figured out via resorting to the
correlated video representations, the video has been well
understood.
We realize the pretext task of TVTS by performing joint
attention among the encoded video spatiotemporal repre-
sentations and the extracted ASR transcript representations.
Specifically, given an input video and its successive tran-
scripts, we randomly shuffle the order of the sentences. Subsequently, we concatenate the encoded script repre-
sentations and the video representations and perform self-
attention to predict the actual orders of the shuffled tran-
scripts by fully understanding the spatiotemporal seman-
tics in the video. The order prediction is cast as a K-way
classification task, where Kis the number of transcripts.
The pretext task indirectly regularizes our model to prop-
erly capture contextualized spatiotemporal representations
to provide enough knowledge for transcript ordering.
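To make the ordering pretext concrete, the following is a minimal PyTorch sketch of such a transcript-sorting head: the video tokens and the K shuffled transcript embeddings are concatenated, passed through joint self-attention, and each transcript position is classified into one of K order slots. The module names, dimensions, and the toy loss below are illustrative assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the TVTS ordering head described above; module and
# dimension names are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class TranscriptSortingHead(nn.Module):
    def __init__(self, dim=768, num_transcripts=4, num_layers=2, num_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True)
        self.joint_encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # K-way classifier: for each shuffled transcript, predict its true position.
        self.order_head = nn.Linear(dim, num_transcripts)
        self.k = num_transcripts

    def forward(self, video_tokens, shuffled_script_tokens):
        # video_tokens: (B, Nv, dim) spatiotemporal features from the video encoder
        # shuffled_script_tokens: (B, K, dim), one pooled embedding per shuffled transcript
        joint = torch.cat([video_tokens, shuffled_script_tokens], dim=1)
        joint = self.joint_encoder(joint)              # joint self-attention
        script_feats = joint[:, -self.k:]              # take the K transcript positions
        return self.order_head(script_feats)           # (B, K, K) logits over positions

# Toy usage: the loss is a cross-entropy over the true (pre-shuffle) positions.
B, Nv, K, dim = 2, 16, 4, 768
head = TranscriptSortingHead(dim, K)
logits = head(torch.randn(B, Nv, dim), torch.randn(B, K, dim))
perm = torch.stack([torch.randperm(K) for _ in range(B)])   # ground-truth order
loss = nn.CrossEntropyLoss()(logits.reshape(-1, K), perm.reshape(-1))
```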
The usage of language supervision is related to video-
text alignment [4, 20] and multimodal representation learn-
ing [18, 55] methods, however, we are completely differ-
ent. (1) Video-text alignment methods focus on retrieval
tasks and are devoted to associating the vision patterns with
language concepts. They are generally single-frame bi-
ased [34] and fail to encode strong out-of-the-box tempo-
ral representations. (2) Multimodal representation learning
methods aim to learn fused representations across modali-
ties rather than vision-only spatiotemporal representations
in our work. Moreover, different from our pretext task
that aims to optimize spatiotemporal video representations,
[55] sorts video frames by taking the features of individual
frames as inputs without temporal modeling, i.e., learning
video representations only at the image level. As [55] points
out, its ordering pretext task is not critical for downstream
tasks (performance even drops) and primarily serves as an
interface to query the model about temporal events.
To summarize, our contributions are three-fold. ( i) We
exploit the rich semantics from script knowledge which is
naturally along with the video, rendering a flexible pre-
training method that can easily apply to uncurated video
data in the real world. ( ii) We introduce a novel pre-
text task for video pre-training, namely, Turning to Video
for Transcript Sorting (TVTS). It promotes the capability
of the model in learning transferable spatiotemporal video
representations. ( iii) We conduct comprehensive compar-
isons with advanced methods. Our pre-trained model ex-
hibits strong out-of-the-box spatiotemporal representations
on downstream action recognition tasks, especially the rel-
atively large-scale and the most challenging SSV2 [22]. We
also achieve state-of-the-art performances on eight common
video datasets in terms of fine-tuning.
|
Zhang_Boosting_Video_Object_Segmentation_via_Space-Time_Correspondence_Learning_CVPR_2023 | Abstract
Current top-leading solutions for video object segmen-
tation (VOS) typically follow a matching-based regime: for
each query frame, the segmentation mask is inferred accor-
ding to its correspondence to previously processed and the
first annotated frames. They simply exploit the supervisory
signals from the groundtruth masks for learning mask pre-
diction only, without posing any constraint on the space-time
correspondence matching, which, however, is the fundamen-
tal building block of such regime. To alleviate this crucial yet
commonly ignored issue, we devise a correspondence-aware
training framework, which boosts matching-based VOS so-
lutions by explicitly encouraging robust correspondence ma-
tching during network learning. Through comprehensively
exploring the intrinsic coherence in videos on pixel and ob-
ject levels, our algorithm reinforces the standard, fully su-
pervised training of mask segmentation with label-free, con-
trastive correspondence learning. Without neither requiring
extra annotation cost during training, nor causing speed de-
lay during deployment, nor incurring architectural modifi-
cation, our algorithm provides solid performance gains on
four widely used benchmarks, i.e., DAVIS2016&2017, and
YouTube-VOS2018&2019, on the top of famous matching-
based VOS solutions.
| 1. Introduction
In this work, we address the task of (one-shot) video ob-
ject segmentation (VOS) [5, 73, 96]. Given an input video with
groundtruth object masks in the first frame, VOS aims at ac-
curately segmenting the annotated objects in the subsequent
frames. As one of the most challenging tasks in computer
vision, VOS benefits a wide range of applications including
augmented reality and interactive video editing [72].
Modern VOS solutions are built upon fully supervised
deep learning techniques and the top-performing ones [10,
12] largely follow a matching-based paradigm, where the
object masks for a new coming frame ( i.e., query frame) are
The first two authors contribute equally to this work.
†Corresponding author.
Figure 1. (a-b) shows some correspondences between a reference
frame and a query frame. (c) gives mask prediction. XMem [10],
even a top-leading matching-based VOS solution, still suffers from
unreliable correspondence. In contrast, with our correspondence-
aware training strategy, robust space-time correspondence can be
established, hence leading to better mask-tracking results.
generated according to the correlations between the query
frame and the previously segmented as well as first anno-
tated frames ( i.e., reference frames), which are stored in an
outside memory. It is thus apparent that the module for cross-
frame matching ( i.e., space-time correspondence modeling)
plays the central role in these advanced VOS systems. Nev-
ertheless, these matching-based solutions are simply trained
under the direct supervision of the groundtruth segmenta-
tion masks. In other words, during training, the whole VOS
system is purely optimized towards accurate segmentation
mask prediction, yet without taking into account any ex-
plicit constraint/regularization on the central component —
space-time correspondence matching. This comes with a le-
gitimate concern for sub-optimal performance, since there
is no any solid guarantee of truly establishing reliable cross-
frame correspondence during network learning. Fig. 1(a) of-
fers a visual evidence for this viewpoint. XMem [10], the
latest state-of-the-art matching-based VOS solution, tends
to struggle at discovering valid space-time correspondence;
indeed, some background pixels/patches are incorrectly reco-
gnized as highly correlated to the query foreground.
The aforementioned discussions motivate us to propose a
new, space-time correspondence-aware training framework
which addresses the weakness of existing matching-based
VOS solutions in an elegant and targeted manner. The core
idea is to empower the matching-based solutions with en-
hanced robustness of correspondence matching, through mi-
ning complementary yet freesupervisory signals from the
inherent nature of space-time continuity of training video
sequences. In more detail, we comprehensively investigate
the coherence nature of videos on both pixel and object lev-
els: i) pixel-level consistency: spatiotemporally proximate pixels/patches tend to be consistent; and ii) object-level coherence: visual semantics of the same object instances at different timesteps tend to remain unchanged. By accommodating
these two properties to an unsupervised learning scheme, we
give more explicit direction on the correspondence match-
ing process, hence promoting the VOS model to learn dense
discriminative and object-coherent visual representation for
robust, matching-based mask tracking (see Fig. 1 (b-c)).
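As an illustration of how these two properties could be turned into label-free training signals, the sketch below pairs an InfoNCE-style pixel-level term with an object-level term. The exact losses used in the paper may differ, and all tensor names here are assumptions.

```python
# Hedged sketch of the two label-free consistency terms described above (pixel-level
# and object-level); the formulation in the paper may differ, and every tensor name
# here is an illustrative assumption.
import torch
import torch.nn.functional as F

def info_nce(query, positive_key, negatives, tau=0.07):
    # query, positive_key: (N, C); negatives: (N, M, C)
    q = F.normalize(query, dim=-1)
    pos = (q * F.normalize(positive_key, dim=-1)).sum(-1, keepdim=True)        # (N, 1)
    neg = torch.einsum('nc,nmc->nm', q, F.normalize(negatives, dim=-1))        # (N, M)
    logits = torch.cat([pos, neg], dim=1) / tau
    labels = torch.zeros(len(q), dtype=torch.long, device=q.device)            # positive at index 0
    return F.cross_entropy(logits, labels)

def correspondence_losses(feat_ref, feat_query, obj_ref, obj_query, num_neg=64):
    """feat_*: (HW, C) patch embeddings of two nearby frames (spatially aligned);
    obj_*: (K, C) pooled embeddings of the same K object instances in both frames."""
    # Pixel-level consistency: a patch should match its spatiotemporal counterpart,
    # not randomly drawn patches from the other frame.
    idx = torch.randint(0, feat_query.shape[0], (feat_ref.shape[0], num_neg))
    loss_pix = info_nce(feat_ref, feat_query, feat_query[idx])
    # Object-level coherence: the same instance at different timesteps stays close;
    # other instances act as negatives (for brevity the positive also appears among
    # the negatives, which only slightly smooths the loss).
    neg_obj = obj_query.unsqueeze(0).expand(obj_ref.shape[0], -1, -1)
    loss_obj = info_nce(obj_ref, obj_query, neg_obj)
    return loss_pix + loss_obj
```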
It is worth mentioning that, beyond boosting the segmen-
tation performance, our space-time correspondence-aware
training framework enjoys several compelling facets. First ,
our algorithm supplements the standard, fully supervised
training paradigm of matching-based VOS with self-training
of space-time correspondence. As a result, it does not cause
any extra annotation burden. Second , our algorithm is fully
compatible with current popular matching-based VOS solu-
tions [10, 12], without particular adaption to the segmenta-
tion network architecture. This is because the learning of the
correspondence matching only happens in the visual embed-
ding space. Third , as a training framework, our algorithm
does not produce additional computational budget to the ap-
plied VOS models during the deployment phase.
We make extensive experiments on various gold-standard
VOS datasets, i.e., DAVIS 2016&2017 [52], and YouTube-
VOS2018&2019 [86]. We empirically prove that, on the top
of recent matching-based VOS models, i.e., STCN [12] and
XMem [10], our approach gains impressive results, surpass-
ing all existing state-of-the-arts. Concretely, in multi-object
scenarios, it improves STCN by 1.2%, 2.3%, and 2.3%, and XMem by 1.5%, 1.2%, and 1.1% on DAVIS 2017 val, YouTube-VOS 2018 val, as well as YouTube-VOS 2019 val, respectively, in terms of J&F. Besides, it respectively promotes STCN and XMem by 0.4% and 0.7% on the single-object benchmark dataset DAVIS 2016 val.
|
Yin_NeRFInvertor_High_Fidelity_NeRF-GAN_Inversion_for_Single-Shot_Real_Image_Animation_CVPR_2023 | Abstract
Nerf-based Generative models have shown impressive
capacity in generating high-quality images with consistent
3D geometry. Despite successful synthesis of fake identity
images randomly sampled from latent space, adopting these
models for generating face images of real subjects is still a
challenging task due to its so-called inversion issue. In this
paper, we propose a universal method to surgically fine-tune
these NeRF-GAN models in order to achieve high-fidelity
animation of real subjects only by a single image. Given
the optimized latent code for an out-of-domain real image,
we employ 2D loss functions on the rendered image to re-
duce the identity gap. Furthermore, our method leverages
explicit and implicit 3D regularizations using the in-domain
neighborhood samples around the optimized latent code to
remove geometrical and visual artifacts. Our experiments
confirm the effectiveness of our method in realistic, high-
fidelity, and 3D consistent animation of real faces on multi-
ple NeRF-GAN models across different datasets.
| 1. Introduction
Animating a human with a novel view and expression
sequence from a single image opens the door to a wide
range of creative applications, such as talking head synthe-
sis [22, 34], augmented and virtual reality (AR/VR) [19],
image manipulation [24, 32, 45], as well as data augmenta-
tion for training of deep models [25,42,43]. Early works of
image animation mostly employed either 2D-based image
generation models [14, 26, 31, 37], or 3D parametric mod-
els [4, 11, 40, 41] ( e.g. 3DMM [6]), but they mostly suffer
from artifacts, 3D inconsistencies or unrealistic visuals.
Representing scenes as Neural Radiance Fields
(NeRF) [23] has recently emerged as a breakthrough
Project page: https://yuyin1.github.io/NeRFInvertor_Homepage/
Figure 1. Image animation results of our method. NeRFInvertor achieves 3D-consistent and ID-preserving animation (i.e. novel views and expressions) of real subjects given only a single image.
approach for generating high-quality images of a scene in
novel views. However, the original NeRF models [5,33,44]
only synthesize images of a static scene and require exten-
sive multi-view data for training, restricting its application
to novel view synthesis from a single image. Several studies
have shown more recent advances in NeRFs by extending
it to generate multi-view face images with single-shot data
even with controllable expressions [7, 8, 12, 27, 38, 46].
These Nerf-based Generative models (NeRF-GANs) are
able to embed attributes of training samples into their latent
variables, and synthesize new identity face images with
different expressions and poses by sampling from their
latent space.
While animatable synthesis of fake identity images is im-
pressive, it is still challenging to generate 3D-consistent and
identity-preserving images of real faces. Specifically, cur-
rent Nerf-GANs have difficulties to accurately translate out-
of-domain images into their latent space, and consequently
change identity attributes and/or introduce artifacts when
applied to most real-world images. In order to synthesize
real faces, the conventional method applies optimization
algorithms to invert the input image to a latent code in a
smaller (i.e. W) or an extended (i.e. W+) NeRF-GAN latent space. However, they both either have ID-preserving or artifacts issues as shown in Figure 2. The W space inversion, in particular, generates realistic novel views and clean 3D geometries, but suffers from the identity gap between the real and synthesized images. In contrast, the W+ space
inversion well preserves the identity but commonly gener-
ates an inaccurate 3D geometry, resulting in visual artifacts
when exhibited from new viewpoints. Hence, it remains
as a trade-off to have a 3D-consistent geometry or preserve
identity attributes when inverting face images out of latent
space distribution.
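For reference, the conventional inversion mentioned above is typically a per-image latent optimization; a minimal sketch is given below, where the generator interface, loss weights, and optional perceptual term are assumed placeholders rather than a specific model's API.

```python
# Minimal sketch of conventional NeRF-GAN inversion by latent optimization, as
# discussed above. `generator` (latent + camera -> image) and the loss terms are
# assumed interfaces, not a specific model's API.
import torch
import torch.nn.functional as F

def invert_latent(generator, target_img, cam, w_init, steps=500, lr=1e-2, lpips_fn=None):
    w = w_init.clone().requires_grad_(True)        # W: (1, dim); W+: (1, L, dim)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        rendered = generator(w, cam)               # render from the fixed input view
        loss = F.l1_loss(rendered, target_img)
        if lpips_fn is not None:                   # optional perceptual term
            loss = loss + lpips_fn(rendered, target_img).mean()
        loss.backward()
        opt.step()
    return w.detach()
```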
In this paper, we present NeRFInvertor as a universal
inversion method for NeRF-GAN models to achieve high-
fidelity, 3D-consistent, and identity-preserving animation of
real subjects given only a single image. Our method is ap-
plicable to most NeRF-GANs trained for static or dynamic scenes, and hence accomplishes synthesis of real im-
ages with both novel views and novel expressions (see Fig-
ure 1). Since the real images are mostly out of the domain
of NeRF-GANs latent space, we surgically fine-tune their
generator to enrich the latent space by leveraging the single
input image without degrading the learned geometries.
In particular, given an optimized latent code for the in-
put image, we first use image space supervision to narrow
the identity gap between the synthesized and input images.
Without a doubt, the fine-tuned model can be overfitted on
the input image and well reconstruct the input in the origi-
nal view. However, fine-tuning with just image space super-
vision produces erroneous 3D geometry due to the insuffi-
cient geometry and content information in a single image,
resulting in visual artifacts in novel views. To overcome
this issue, we introduce regularizations using the surround-
ing samples in the latent space, providing crucial guidance
for the unobserved part in the image space. By sampling la-
tent codes from the neighborhood of optimized latent vari-
ables with different poses and expressions, we enforce a
novel geometric constraint on the density outputs of fine-
tuned and original pretrained generators. We also further
add regularizations on the rendered images of neighborhood
samples obtained from the fine-tuned and pretrained genera-
Figure 2. Trade-off between ID-preserving and removing artifacts. Optimizing latent variables of NeRF-GANs for synthesis of a real face leads to a trade-off between identity preservation and geometrical and visual artifacts. Specifically, W space inversion results in clean geometry but an identity gap between real and generated images, while W+ space inversion preserves identity attributes but yields inaccurate geometry and visual artifacts.
tors. These regularizations help us to leverage the geometry
and content information of those in-domain neighborhood
samples around the input. Our experiments validate the ef-
fectiveness of our method in realistic, high-fidelity, and 3D
consistent animating of real face images.
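A hedged sketch of how such a fine-tuning objective could be assembled is shown below: an image-space loss on the input view plus density and rendered-image regularizations on latents sampled around the optimized code, computed against the frozen pretrained generator. The generator interfaces, noise scale, and loss weights are assumptions for illustration only.

```python
# A hedged sketch of the fine-tuning objective described above. All interfaces
# (G.density, G.render, the noise scale, the loss weights) are assumptions, not the
# authors' implementation.
import torch
import torch.nn.functional as F

def finetune_step(G_tuned, G_frozen, w_opt, target_img, cam, rays, sigma=0.05,
                  lambda_geo=1.0, lambda_img=1.0):
    # (1) Image-space supervision on the real input view to close the identity gap.
    loss = F.l1_loss(G_tuned.render(w_opt, cam), target_img)

    # (2) Sample an in-domain neighbor around the optimized latent code.
    w_nbr = w_opt + sigma * torch.randn_like(w_opt)

    # (3) Geometric constraint: densities of the tuned and frozen generators should
    #     agree on the neighbor, preserving the learned 3D geometry.
    with torch.no_grad():
        dens_ref = G_frozen.density(w_nbr, rays)
        img_ref = G_frozen.render(w_nbr, cam)
    loss = loss + lambda_geo * F.l1_loss(G_tuned.density(w_nbr, rays), dens_ref)

    # (4) Rendered-image regularization on the same neighbor.
    loss = loss + lambda_img * F.l1_loss(G_tuned.render(w_nbr, cam), img_ref)
    return loss
```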
The main contributions of this paper are as follows:
1. We proposed a universal method for inverting NeRF-
GANs to achieve 3D-consistent, high-fidelity, and
identity-preserving animation of real subjects given
only a single image.
2. We introduce a novel geometric constraint by leverag-
ing density outputs of in-domain samples around the
input to provide crucial guidance for the unobserved
part in the 2D space.
3. We demonstrate the effectiveness of our method on
multiple NeRF-GAN models across different datasets.
|
Zhang_PeakConv_Learning_Peak_Receptive_Field_for_Radar_Semantic_Segmentation_CVPR_2023 | Abstract
The modern machine learning-based technologies have
shown considerable potential in automatic radar scene un-
derstanding. Among these efforts, radar semantic segmen-
tation (RSS) can provide more refined and detailed infor-
mation including the moving objects and background clut-
ters within the effective receptive field of the radar. Moti-
vated by the success of convolutional networks in various
visual computing tasks, these networks have also been in-
troduced to solve RSS task. However, neither the regular
convolution operation nor the modified ones are specific
to interpret radar signals. The receptive fields of existing
convolutions are defined by the object presentation in opti-
cal signals, but these two signals have different perception
mechanisms. In classic radar signal processing, the object
signature is detected according to a local peak response,
i.e., CFAR detection. Inspired by this idea, we redefine the
receptive field of the convolution operation as the peak re-
ceptive field (PRF) and propose the peak convolution oper-
ation (PeakConv) to learn the object signatures in an end-
to-end network. By incorporating the proposed PeakConv
layers into the encoders, our RSS network can achieve bet-
ter segmentation results compared with other SoTA meth-
ods on a multi-view real-measured dataset collected from
an FMCW radar. Our code for PeakConv is available at
https://github.com/zlw9161/PKC.
| 1. Introduction
Radar is a remote sensor, which usually uses modu-
lated electromagnetic signals to detect the objects of interest
through directional transmitting antennas in a specific effec-
tive working field [22]. As an active detection device, radar
is more robust to extreme weather ( e.g., haze, rain or snow)
than other active detection device such as LiDARs [2], and
it is also not susceptible to dim light condition and sun glare,
as the passive optical sensors are [19]. In addition to the real-world location information, it can also tell the velocity of the moving objects thanks to the Doppler effect. Due to these advantages, radar sensors have played an irreplaceable role in many automotive security and defense applications, e.g., autonomous safety driving or UAV early warning.
*Equal contribution. †Corresponding author. This research is supported by the Young Science Foundation of the National Natural Science Foundation of China (No. 62206258).
Conventional radar detection mostly relies on the peak
detection algorithm following constant false alarm rate
(CFAR) [22, 23] principle. Taking frequency modulated
continuous wave (FMCW) radar as example, the raw radar
echos are first converted as multi-domain united frequency
representations, e.g., range-Doppler (RD) and range-angle
(RA) maps, through a series of cascading fast Fourier trans-
formations (FFTs). Then for each cell under test (CUT) in
the input RD/RA map, the CFAR detector will determine
whether it contains moving object information according to
an estimated detection threshold, which fully considers the
characteristics of the radar signal itself. However, to ob-
tain good effect in practical application, it is necessary to
manually fine-tune various hyper-parameters including the
thresholding factor, sizes and shapes of the local scope ( i.e.,
the bandwidths of reference and guard units). Beyond that,
conventional radar detection cannot give category informa-
tion of the object. These two inconveniences hinder the con-
ventional detection method from automatic semantic radar
scene understanding.
Encouraged by the success of modern deep learning
techniques in computational perception, especially the ob-
ject detection [8, 15, 20, 21, 29] and semantic segmenta-
tion [5, 11, 16, 24, 28] in computer vision, some efforts had
been made recently for better automatic radar scene inter-
pretation. These efforts evolve the target-clutter binary hy-
pothesis of conventional radar testing into target semantic
characterization of modern machine learning, i.e., radar ob-
ject detection (ROD) [10, 17, 27] and radar semantic seg-
mentation (RSS) [3, 13, 18]. Most of these methods used
convolution networks as backbone models, which take radar
frequency representations as input, and then make predic-
tions on RA or RD view or both two views. For example,
a multi-view RSS (MVRSS) network [18] was proposed
Figure 1. An exemplar illustration of moving object signa-
tures/presentations in (a) the 2D RD map and the corresponding
(b) RD-amplitude 3D representation of radar signals, and their (c)
synchronized camera image.
to take better advantage of radar localization capability by
making “unit-wise” predictions on both RD and RA fre-
quency domains. To support the sufficient training of these
deep models, a few large-scale radar datasets were also col-
lected and created, e.g., OxfordRobotCar [9], nuScenes [4],
CRUW [26] and CARRADA [19].
However, the electromagnetic object signatures received
by radar are not as intuitively understood as the optical ones
captured by the cameras as shown in Fig. 1. With rich tex-
ture and color information in the image, the convolution op-
eration can learn useful semantic information from a rectan-
gular local spatial receptive field (RF). And by introducing
some intuitive priors of human vision, more efficient learn-
ing mechanisms for convolution had been proposed, e.g.,
multi-scale fusion [12,15,25], dilation [5,28] and deforma-
tion [8, 29]. So far, these mechanisms are also introduced
into radar data processing, such as the inception or pyramid
pooling for multi-scale information, atrous convolution for
larger dilated RF and deformable convolution for irregular
object signature in ROD-Net [27] and MVRSS [18]. De-
spite the multi-scale mechanism, which is more of a modu-
lar idea, i.e., the computation is decoupled from the convo-
lution itself, other variants are actually changing the RF it-
self. One conclusion might be summed up that, the RF sam-
pling/selection manner plays a very important role in convo-
lution. While none of these RF selection manners including
the regular one is proposed specifically for the radar data,
thus they might not fully exploit the potential of convolu-
tional networks in radar scene understanding. This concern
motivates us to rethink the internal relation between convo-
lution and the conventional radar detection mechanism, and
try to find a more efficient and specific convolution mecha-nism for radar data.
To achieve our goal, we take a look inside of the con-
ventional radar detection method and the convolution op-
eration in deep learning. As aforementioned, the conven-
tional detection method is a kind of CFAR-based peak de-
tection, e.g., commonly used cell averaging-CFAR (CA-
CFAR) [22]. For a CUT, $x_c$, of the input RD representation, CA-CFAR detection can be divided into three steps: (i) averaging aggregation from the reference cells $\{x_r^{(i)}\}_{i=1}^{N}$ around the CUT, excluding the guard cells; (ii) threshold computing, $\Theta = \xi \cdot \frac{1}{N}\sum_{i=1}^{N} x_r^{(i)}$; (iii) decision-making by comparing $x_c$ and $\Theta$. It can be seen that the decision-making basis is the difference between the CUT and its threshold, i.e., the weighted summation of $\{x_r^{(i)}\}_{i=1}^{N}$ with a shared weight $\frac{\xi}{N}$. In other words, the key to determining whether the CUT contains an object for CA-CFAR is the denoised peak frequency response from an RF consisting of the CUT and its reference cells. Yet none of the convolution operators mentioned
above can explicitly possess such property, i.e., each output
unit is actually a weighted summation of the units in a local
dense/dilated rectangular or deformable RF, which does not
strictly follow the guard-reference policy.
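For concreteness, a minimal 1D CA-CFAR detector following the guard-reference policy summarized above might look as follows; the reference and guard bandwidths and the scaling factor ξ are the hand-tuned hyper-parameters mentioned earlier, and the values used here are arbitrary examples.

```python
# Minimal 1D CA-CFAR sketch of the classical detector summarized above; the reference
# and guard bandwidths and the scaling factor xi are hand-tuned hyper-parameters
# (values here are arbitrary examples).
import numpy as np

def ca_cfar_1d(x, num_ref=8, num_guard=2, xi=3.0):
    """Return a boolean detection mask over a 1D range/Doppler profile `x`."""
    n = len(x)
    detections = np.zeros(n, dtype=bool)
    for cut in range(num_ref + num_guard, n - num_ref - num_guard):
        # Reference cells on both sides of the CUT, skipping the guard cells.
        left = x[cut - num_guard - num_ref: cut - num_guard]
        right = x[cut + num_guard + 1: cut + num_guard + num_ref + 1]
        noise_level = np.mean(np.concatenate([left, right]))
        threshold = xi * noise_level          # Theta = xi * mean of reference cells
        detections[cut] = x[cut] > threshold
    return detections

# Toy usage: a noisy profile with one strong return around index 50.
profile = np.abs(np.random.randn(128)) + 10.0 * (np.arange(128) == 50)
print(np.where(ca_cfar_1d(profile))[0])
```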
Therefore, in this work we redefine the RF of the con-
volution operator as the guard-reference style, and call such
new type RF the peak receptive field (PRF), which consists
of the center unit and its reference neighbors. Then with
some simple computational designs, we present two novel
convolution operations to explicitly learn the peak response
from PRF, i.e., PeakConvs. Compared with other convolu-
tion operations, PeakConvs explicitly possess the advantage
of the conventional radar detection methods. In comparison
with the conventional CA-CFAR, adaptive peak response
with learnable weights and high-level semantic representa-
tion via task-driven learning paradigm can be achieved since
PeakConvs maintain the computational compatibility of the
regular convolution operation. The main contributions are:
•A novel convolution computing paradigm for radar
data processing . Instead of extracting radar signature
directly from RF, we propose learning peak response
from redefined PRF, which is more suitable for learn-
ing tasks related to radar data.
•Two implementations of the proposed PeakConv .
According to the participation of center unit dur-
ing interference ( e.g., device noises and background
clutters) estimation, there are two approaches of
PeakConv, including vanilla-PeakConv (PKC), and
response difference aware PeakConv (ReDA-PKC).
•Well-performed multi-view RSS frameworks based
on PeakConvs : by introducing PeakConvs into
encoders of the convolutional automatic-encoder-
decoder (CAED) framework, two RSS networks with
multi-input and multi-output (MIMO) style are pre-
sented. Our networks can achieve SoTA performance
on both RD and RA views.
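The snippet below sketches one plausible reading of the PRF idea described above: a convolution whose learnable taps live only on the reference ring around each cell (skipping a guard band), with the ReDA variant aggregating differences against the center response. The kernel geometry and normalization are illustrative assumptions and may differ from the authors' implementation.

```python
# A hedged sketch of convolving over a "reference ring" (the PRF) with a guard band.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PeakConv2d(nn.Module):
    """reda=False: learnable weighted sum over the reference ring only.
    reda=True: the same ring weights applied to differences against the center unit,
    i.e. sum_i w_i * (x_i - x_center). Geometry and scaling are assumptions."""
    def __init__(self, in_ch, out_ch, guard=1, ref=1, reda=False):
        super().__init__()
        k = 2 * (guard + ref) + 1
        self.reda, self.pad = reda, k // 2
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        mask = torch.ones(k, k)
        mask[ref: ref + 2 * guard + 1, ref: ref + 2 * guard + 1] = 0.0  # drop center + guard
        self.register_buffer("mask", mask)

    def forward(self, x):
        w = self.weight * self.mask                       # only reference-ring taps are active
        out = F.conv2d(x, w, self.bias, padding=self.pad)
        if self.reda:
            # Subtract sum_i w_i * x_center, realized as a 1x1 convolution.
            w_sum = w.sum(dim=(2, 3), keepdim=True)       # (out_ch, in_ch, 1, 1)
            out = out - F.conv2d(x, w_sum)
        return out

# Toy usage on a batch of RD maps with 8 channels.
pkc = PeakConv2d(8, 16, guard=1, ref=2, reda=True)
y = pkc(torch.randn(2, 8, 64, 64))   # -> (2, 16, 64, 64)
```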
|
Yi_Generating_Holistic_3D_Human_Motion_From_Speech_CVPR_2023 | Abstract
This work addresses the problem of generating 3D holistic
body motions from human speech. Given a speech record-
ing, we synthesize sequences of 3D body poses, hand ges-
tures, and facial expressions that are realistic and diverse.
To achieve this, we first build a high-quality dataset of 3D
holistic body meshes with synchronous speech. We then
define a novel speech-to-motion generation framework in
which the face, body, and hands are modeled separately.
The separated modeling stems from the fact that face artic-
ulation strongly correlates with human speech, while body
poses and hand gestures are less correlated. Specifically,
we employ an autoencoder for face motions, and a composi-
tional vector-quantized variational autoencoder (VQ-VAE)
for the body and hand motions. The compositional VQ-
VAE is key to generating diverse results. Additionally, we
propose a cross-conditional autoregressive model that gener-
ates body poses and hand gestures, leading to coherent and
realistic motions. Extensive experiments and user studies demonstrate that our proposed approach achieves state-of-the-art performance both qualitatively and quantitatively. Our dataset and code are released for research purposes at https://talkshow.is.tue.mpg.de/.
*Equal Contribution. †Joint Corresponding Authors.
| 1. Introduction
From linguistics and psychology we know that humans
use body language to convey emotion and use gestures in
communication [ 22,28]. Motion cues such as facial expres-
sion, body posture and hand movement all play a role. For
instance, people may change their gestures when shifting
to a new topic [ 52], or wave their hands when greeting an
audience. Recent methods have shown rapid progress on
modeling the translation from human speech to body mo-
tion, and can be roughly divided into rule-based [ 38] and
learning-based [ 20,21,23,32,33,55] methods. Typically,
the body motion in these methods is represented as the mo-
tion of a 3D mesh of the face/upper-body [ 5,15,26,43,44],
or 2D/3D landmarks of the face with 2D/3D joints of the
hands and body [ 21,23,55]. However, this is not sufficient
to understand human behavior. Humans communicate with
their bodies, hands and facial expressions together. Captur-
ing such coordinated activities as well as the full 3D surface
in tune with speech is critical for virtual agents to behave
realistically and interact with listeners meaningfully.
In this work, we focus on generating the expressive 3D
motion of person, including their body, hand gestures, and
facial expressions, from speech alone; see Fig. 1. To do
this, we must learn a cross-modal mapping between audio
and 3D holistic body motion, which is very challenging in
practice for several reasons. First, datasets of 3D holistic
body meshes and synchronous speech recordings are scarce.
Acquiring them in the lab is expensive and doing so in the
wild has not been possible. Second, real humans often vary
in shape, and their faces and hands are highly deformable. It
is not trivial to generate both realistic and stable results of
3D holistic body meshes efficiently. Lastly, as different body
parts correlate differently with speech signals, it is difficult
to model the cross-modal mapping and generate realistic and
diverse holistic body motions.
We address the above challenges and learn to model the
conversational dynamics in a data-driven way. Firstly, to
overcome the issue of data scarcity, we present a new set of
3D holistic body mesh annotations with synchronous audio
from in-the-wild videos. This dataset was previously used
for learning 2D/3D gesture modeling with 2D body key-
point annotations [ 21] and 3D keypoint annotations of the
holistic body [ 23] by applying existing models separately.
Apart from facilitating speech and motion modeling, our
dataset can also support broad research topics like realistic
digital human rendering. Then, to support our data-driven
approach to modeling speech-to-motion translation, an ac-
curate holistic body mesh is needed. Existing methods have
focused on capturing either the body shape and pose isolated
from the hands and face [ 8,17,25,34,47,57,58,61], or
the different parts together, which often produces unreal-
istic or unstable results, especially when applied to video
sequences [ 18,40,62]. To solve this, we present SHOW,
which stands for “Synchronous Holistic Optimization in the
Wild”. Specifically, SHOW adapts SMPLify-X [ 40] to the
videos of talking persons, and further improves it in terms
of stability, accuracy, and efficiency through careful design
choices. Figure 2 shows example reconstruction results.
Lastly, we investigate the translation from audio to 3D
holistic body motion represented as a 3D mesh (Fig. 1). We
propose TalkSHOW, the first approach to autoregressively
synthesize realistic and diverse 3D body motions, hand ges-
tures and facial expression of a talking person from speech.
Motivated by the fact that the face (i.e. mouth region) is
strongly correlated with the audio signal, while the body and
hands are less correlated, or even uncorrelated, TalkSHOW
designs separate motion generators for different parts andgives each part full play. For the face part, to model the
highly correlated nature of phoneme-to-lip motion, we de-
sign a simple encoder-decoder based face generator that
encodes rich phoneme information by incorporating the pre-
trained wav2vec 2.0 [ 6]. On the other hand, to predict the
non-deterministic body and hand motions, we devise a novel
VQ-V AE [ 50] based framework to learn a compositional
quantized space of motion, which efficiently captures a di-
verse range of motions. With the learned discrete represen-
tation, we further propose a novel autoregressive model to
predict a multinomial distribution of future motion, cross-
conditioned between existing motions. From this, a wide
range of motion modes representing coherent poses can be
sampled, leading to realistic looking motion generation.
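A heavily simplified sketch of the cross-conditional autoregressive idea is given below: body and hand motions are discrete VQ codes, and the predictor for each part is conditioned on the audio features and on the previous codes of both parts, producing a multinomial to sample from. The codebook sizes, the recurrent backbone, and all names are assumptions, not the TalkSHOW architecture.

```python
# Heavily hedged sketch of cross-conditional autoregressive code prediction; all
# sizes and modules are illustrative assumptions.
import torch
import torch.nn as nn

class CrossCondAutoregressor(nn.Module):
    def __init__(self, audio_dim=256, codebook_body=512, codebook_hand=512, hidden=512):
        super().__init__()
        self.emb_body = nn.Embedding(codebook_body, hidden)
        self.emb_hand = nn.Embedding(codebook_hand, hidden)
        self.rnn = nn.GRU(audio_dim + 2 * hidden, hidden, batch_first=True)
        self.head_body = nn.Linear(hidden, codebook_body)   # multinomial over body codes
        self.head_hand = nn.Linear(hidden, codebook_hand)   # multinomial over hand codes

    def forward(self, audio_feats, prev_body, prev_hand):
        # audio_feats: (B, T, audio_dim); prev_*: (B, T) code indices from earlier steps
        x = torch.cat([audio_feats, self.emb_body(prev_body), self.emb_hand(prev_hand)], -1)
        h, _ = self.rnn(x)
        return self.head_body(h), self.head_hand(h)

# Sampling the next body/hand codes from the predicted multinomials and feeding them
# back in allows diverse but coherent motions for the same audio.
model = CrossCondAutoregressor()
B, T = 2, 8
logits_b, logits_h = model(torch.randn(B, T, 256),
                           torch.randint(0, 512, (B, T)),
                           torch.randint(0, 512, (B, T)))
next_body = torch.distributions.Categorical(logits=logits_b[:, -1]).sample()
```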
We quantitatively evaluate the realism and diversity of our
synthesized motion compared to ground truth and baseline
methods and ablations. To further corroborate our qualitative
results, we evaluate our approach through an extensive user
study. Both quantitative and qualitative studies demonstrate
the state-of-the-art quality of our speech-synthesized full
expressive 3D character animations.
|
Yang_Bootstrap_Your_Own_Prior_Towards_Distribution-Agnostic_Novel_Class_Discovery_CVPR_2023 | Abstract
Novel Class Discovery (NCD) aims to discover unknown
classes without any annotation, by exploiting the transfer-
able knowledge already learned from a base set of known
classes. Existing works hold an impractical assumption
that the novel class distribution prior is uniform, yet neglect
the imbalanced nature of real-world data. In this paper,
we relax this assumption by proposing a new challenging
task: distribution-agnostic NCD, which allows data drawn
from arbitrary unknown class distributions and thus ren-
ders existing methods useless or even harmful. We tackle
this challenge by proposing a new method, dubbed “Boot-
strapping Your Own Prior (BYOP)”, which iteratively es-
timates the class prior based on the model prediction it-
self. At each iteration, we devise a dynamic temperature
technique that better estimates the class prior by encour-
aging sharper predictions for less-confident samples. Thus,
BYOP obtains more accurate pseudo-labels for the novel
samples, which are beneficial for the next training itera-
tion. Extensive experiments show that existing methods suf-
fer from imbalanced class distributions, while BYOP1 out-
performs them by clear margins, demonstrating its effec-
tiveness across various distribution scenarios.
| 1. Introduction
With the ever-increasing growth of massive unlabeled
data, our community is interested in mining and leveraging
the “dark” knowledge therein [2, 7, 28]. To this end, Novel
Class Discovery (NCD) [14] is considered as a pivotal step,
which aims to automatically recognize novel classes by par-
titioning the unlabeled data into different clusters with the
knowledge learned from a labeled base class set. Note
that the base knowledge is indispensable because clustering
without a prior is known as an ill-posed problem [20]—data
*Corresponding author.
1Code: https://github.com/muliyangm/BYOP.
Figure 1. Novel Class Discovery (NCD) in different scenarios. (a) NCD with no prior on balanced unlabeled data. (b) NCD with the uniform prior on balanced unlabeled data. (c) NCD with the uniform prior on imbalanced unlabeled data.
can always be clustered w.r.t. any feature dimension, e.g.,
color and background. Hence, the base set provides a pre-
liminary prior for defining class vs.non-class features, e.g.,
the object background feature is removed for discovering
new classes.
Yet, clustering is still ambiguous to other features not re-
moved by the base knowledge. As shown in Fig. 1(a), if we
do not specify the class distribution prior, i.e., #sample per
class, the two clusters may be considered as red vs. other color, but not the desired moose vs. cow. Therefore,
clustering with such a specified prior is a common practice
in existing NCD methods [11,34,47]. However, they hold a
na¨ıve assumption that the class distribution in the unlabeled
data is balanced, i.e., the prior is uniform. This is imprac-
tical because the nature of data distribution—especially for
Figure 2. The training pipeline of our proposed BYOP for distribution-agnostic NCD. (a) At each iteration, BYOP clusters the unlabeled data using the class prior coming from the previous iteration to generate pseudo-labels (Sec. 3.1). (b) BYOP is trained to predict the novel class distributions using the generated pseudo-labels, where we devise a dynamic temperature technique to encourage more confident predictions (Sec. 3.2). (c) The class prior is estimated by calculating the proportion of each class assignment, which is ready to use for the next iteration (Sec. 3.3).
large-scale data—is imbalanced [23, 31, 33]. As shown in
the comparison between Figs. 1(b) and (c), if the data is
imbalanced, the uniform prior is misleading.
In this paper, we relax such an impractical assump-
tion by allowing novel data drawn from an arbitrary un-
known class distribution. We term this new challenging task
distribution-agnostic NCD , which renders existing meth-
ods useless or even harmful when the novel data is highly-
imbalanced. The crux of the problem is the prior itself—on
one hand, it is a critical ingredient against cluster ambigui-
ties; on the other hand, it becomes misleading when it mis-
matches with the true class distribution. This gives rise to
a chicken-egg problem in distribution-agnostic NCD, as the
class distribution is no longer known as a priori. We pro-
pose to address this dilemma by "Bootstrapping Your Own Prior" (BYOP /baI"6p/)—iteratively estimating the class
distribution based on the model prediction itself, which can
be used as a prior to obtain more accurate pseudo-labels that
help the next training iteration.
The BYOP pipeline is summarized in Fig. 2. Given a
batch of unlabeled data with an arbitrary unknown class
distribution, we deploy a clustering method [1] that parti-
tions the data subject to the current class prior. At each
iteration, the current class prior estimation is not yet ac-
curate ( e.g., we initialize by the uniform prior), and thus
may result in ambiguous clusters for the minority classes if
the true class distribution is highly-imbalanced (Fig. 2(a)).The cluster assignments are used as pseudo-labels to train
a classifier to discover novel classes. However, due to the
imperfections in pseudo-labels, the predicted class distribu-
tions are inevitably ambiguous, especially for those minor-
ity classes. To this end, we propose a dynamic tempera-
ture technique that can be integrated into the classifier to
output more confident distribution predictions (Fig. 2(b)).
The main idea is to encourage sharper predicted distribu-
tions for less-confident data by a per-sample temperature
adjustment. In particular, we call it “adaptive” because it
won’t hurt the prediction for the samples which are already
confident, while significantly disambiguating those who are
less confident, as later discussed in Fig. 3.
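The following sketch illustrates one way such a per-sample temperature and the subsequent class-prior estimation could be implemented; the mapping from confidence to temperature is an assumption, and the paper's actual schedule may differ.

```python
# Hedged sketch of per-sample dynamic temperature sharpening and prior estimation;
# the confidence-to-temperature mapping below is only an illustrative assumption.
import torch
import torch.nn.functional as F

def sharpen_with_dynamic_temperature(logits, t_min=0.1, t_max=1.0):
    """Per-sample temperature: confident samples keep t close to t_max (nearly
    unchanged), while less-confident samples get a smaller t, which sharpens them."""
    with torch.no_grad():
        conf = F.softmax(logits, dim=1).max(dim=1).values          # (N,)
        temp = t_min + (t_max - t_min) * conf                      # low conf -> low temp
    return F.softmax(logits / temp.unsqueeze(1), dim=1)

def estimate_class_prior(probs):
    """Gather hard assignments and return the per-class proportion (the new prior)."""
    num_classes = probs.shape[1]
    assign = probs.argmax(dim=1)
    counts = torch.bincount(assign, minlength=num_classes).float()
    return counts / counts.sum()

# Toy usage: the estimated prior feeds the clustering step of the next iteration.
logits = torch.randn(256, 5)
prior = estimate_class_prior(sharpen_with_dynamic_temperature(logits))
```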
To estimate the class prior, we gather the predicted novel
class distributions as the class assignments for the training
samples, and calculate the proportion of each class assign-
ment (Fig. 2(c)), so that we can derive a new class prior
that is beneficial for the next training iteration. Note that
the higher prediction accuracy for majority classes guar-
antees to estimate a preliminary prior that helps generate
more accurate pseudo-labels, which in turn promotes the
reliability of the prior estimation for other classes via more
accurate model predictions. We benchmark our proposed
BYOP and the current state-of-the-art methods in the chal-
lenging distribution-agnostic NCD task on several standard
datasets. While current methods suffer from imbalanced
class distributions, BYOP outperforms them by large mar-
gins, demonstrating its effectiveness across different class
distributions, including the conventionally balanced one.
To sum up, our contributions are three-fold:
• A new challenging distribution-agnostic NCD task that
relaxes the impractical uniform class distribution as-
sumption in current NCD works.
• A novel training paradigm dubbed BYOP to handle ar-
bitrary unknown class distributions in NCD by itera-
tively estimating and utilizing the class prior.
• Extensive experiments that benchmark the current
state-of-the-art methods as well as the superiority of
the proposed BYOP in distribution-agnostic NCD.
|
Yi_Towards_Artistic_Image_Aesthetics_Assessment_A_Large-Scale_Dataset_and_a_CVPR_2023 | Abstract
Image aesthetics assessment (IAA) is a challenging task
due to its highly subjective nature. Most of the current stud-
ies rely on large-scale datasets (e.g., AVA and AADB) to
learn a general model for all kinds of photography images.
However, little light has been shed on measuring the aes-
thetic quality of artistic images, and the existing datasets
only contain relatively few artworks. Such a defect is a great
obstacle to the aesthetic assessment of artistic images. To
fill the gap in the field of artistic image aesthetics assess-
ment (AIAA), we first introduce a large-scale AIAA dataset:
Boldbrush Artistic Image Dataset (BAID), which consists
of 60,337 artistic images covering various art forms, with
more than 360,000 votes from online users. We then pro-
pose a new method, SAAN (Style-specific Art Assessment
Network), which can effectively extract and utilize style-
specific and generic aesthetic information to evaluate artis-
tic images. Experiments demonstrate that our proposed
approach outperforms existing IAA methods on the pro-
posed BAID dataset according to quantitative comparisons.
We believe the proposed dataset and method can serve as
a foundation for future AIAA works and inspire more re-
search in this field. Dataset and code are available at:
https://github.com/Dreemurr-T/BAID.git
| 1. Introduction
With the ever-growing scale of online visual data, image
aesthetic assessment (IAA) shows great potential in a vari-
ety of applications such as photo recommendation, image
ranking and image search [6]. In recent years, image style
transfer [9, 14, 19, 20, 26] and AI painting [15, 39] have be-
come high-profile research areas. Users can easily generate
artworks of numerous styles from websites and online ap-
plications, which has led to the explosion of artistic images
online and the drastic increase in demand for automatically
evaluating artwork aesthetics. We refer to this problem as
*Corresponding author.
Figure 1. Samples from the proposed BAID dataset. BAID covers
a wide range of artistic styles and painting themes.
artistic image aesthetic assessment (AIAA) .
The artistic image aesthetic assessment task is similar
to IAA for being extremely challenging due to its highly
subjective nature, as different individuals may have distinct
visual and art preferences. Existing datasets related to this
task can be summarized into three categories, but none of
them meets the requirements of the AIAA task: (1) IAA
datasets : modern IAA methods [13, 21, 23, 30, 32, 34] are
data-driven, usually trained and evaluated on large-scale
IAA datasets, e.g., AVA [25], AADB [17] and CUHK-
PQ [22]. However, these datasets only contain real-world
photos and do not include artistic images like oil paintings
or pencil sketches. This deficiency of artistic images is
prevalent in existing IAA datasets [4, 16, 17, 22, 28], which
means that given an artwork, existing IAA methods evalu-
ate it based on perceptions learned from photography, and
the evaluation is likely to be inaccurate since the perceptual
rules of photography and art are not the same. (2) Artis-
tic datasets without aesthetic labels : existing large-scale
artistic image datasets [1, 29, 36] are mainly used to train
style transfer, artistic style classification or text to image
models, but they lack score annotations indicating image
aesthetic level. (3) Small-scale AIAA datasets : efforts into
building public AIAA datasets are scarce and the existing
datasets [3,8] contain relatively few number of images (less
than 2,000). Based on the above observations, we conclude
thatthe lack of a large-scale AIAA dataset is the biggest
obstacle towards developing AIAA approaches.
To solve the problem, we first introduce a large-scale
dataset specifically constructed for the AIAA task: the
Boldbrush Artistic Image Dataset (BAID), which consists
of 60,337 artworks annotated with more than 360,000 votes.
The proposed BAID is, to our knowledge, the largest AIAA
dataset, which far exceeds existing IAA and AIAA datasets
in the quantity and quality of artworks.
Furthermore, we propose a baseline model, called the
Style-specific Art Assessment Network (SAAN), which can
effectively exploit the style features and the generic aes-
thetic features of the given artwork. Our model consists
of three modules: 1) Generic Aesthetic Feature Extrac-
tion Branch: inspired by the studies [27, 31], we adopt
a self-supervised learning scheme to train a Generic Aes-
thetic Branch to extract aesthetics-aware features. The self-
supervised scheme is based on the correlation between the
aesthetic quality of the images and degradation editing op-
erations. This essentially provides data augmentation such
that the model can better learn the quality of different art-
works. 2) Style-specific Aesthetic Feature Extraction
Branch: observing that the style of the artwork is critical
when assessing its aesthetic value and different styles need
to extract different style-related aesthetic features, we pro-
pose a Style-specific Aesthetic Branch to incorporate style
information into aesthetic features and extract style-specific
aesthetic features via adaptive instance normalization [14].
3)Spatial Information Fusion: we also add a non-local
block [35] into the proposed method to fuse spatial infor-
mation into the extracted aesthetic features.
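As an illustration of the style-specific branch, the sketch below shows how a style vector could modulate aesthetic features through adaptive instance normalization; the feature dimensions and module names are assumptions rather than the exact SAAN design.

```python
# Hedged sketch of injecting style information into aesthetic features via AdaIN;
# dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class StyleAdaIN(nn.Module):
    def __init__(self, style_dim=512, feat_ch=256):
        super().__init__()
        # Style features predict per-channel scale and shift for the aesthetic features.
        self.to_scale = nn.Linear(style_dim, feat_ch)
        self.to_shift = nn.Linear(style_dim, feat_ch)

    def forward(self, aes_feat, style_vec):
        # aes_feat: (B, C, H, W) aesthetic features; style_vec: (B, style_dim)
        mu = aes_feat.mean(dim=(2, 3), keepdim=True)
        sigma = aes_feat.std(dim=(2, 3), keepdim=True) + 1e-5
        normalized = (aes_feat - mu) / sigma                      # instance normalization
        scale = self.to_scale(style_vec).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(style_vec).unsqueeze(-1).unsqueeze(-1)
        return scale * normalized + shift                         # style-specific features

# Toy usage: modulate aesthetic features of a batch of artworks with their style codes.
adain = StyleAdaIN()
styled = adain(torch.randn(4, 256, 14, 14), torch.randn(4, 512))
```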
The main contributions of our work are three-fold:
• We address the problem of artistic image aesthetics
assessment, and introduce a new large-scale dataset
BAID consisting of 60,337 artworks annotated with
more than 360,000 votes to facilitate research in this
direction.
• We propose a style-specific artistic image assessmentTable 1. Summary of IAA/AIAA datasets and our proposed BAID
dataset. BAID provides a significantly larger number of artistic
images and has user subjective votes.
Dataset Number of images Number of artistic images
DP Challenge [4] 16,509 –
Photo.Net [16] 20,278 –
CUHK-PQ [22] 17,673 –
AVA [25] 255,530 –
AADB [17] 10,000 –
FLICKR-AES [28] 40,000 –
PARA [37] 31,220 –
TAD66K [12] 66,327 1,200
JenAesthetic [3] 1,628 1,628
VAPS [8] 999 999
BAID (Ours) 60,337 60,337
network called SAAN, which combines style-specific
and generic aesthetic features to evaluate artworks.
• We evaluate the state-of-the-art IAA approaches and
our proposed method on the proposed BAID dataset.
Our model achieves promising results on all the met-
rics, which clearly demonstrates the validity of our
model.
|
Yan_Two-Shot_Video_Object_Segmentation_CVPR_2023 | Abstract
Previous works on video object segmentation (VOS) are
trained on densely annotated videos. Nevertheless, ac-
quiring annotations in pixel level is expensive and time-
consuming. In this work, we demonstrate the feasibility of
training a satisfactory VOS model on sparsely annotated
videos—we merely require two labeled frames per train-
ing video while the performance is sustained. We term this
novel training paradigm as two-shot video object segmen-
tation, or two-shot VOS for short. The underlying idea is to
generate pseudo labels for unlabeled frames during train-
ing and to optimize the model on the combination of la-
beled and pseudo-labeled data. Our approach is extremely
simple and can be applied to a majority of existing frame-
works. We first pre-train a VOS model on sparsely an-
notated videos in a semi-supervised manner, with the first
frame always being a labeled one. Then, we adopt the pre-
trained VOS model to generate pseudo labels for all un-
labeled frames, which are subsequently stored in a pseudo-
label bank. Finally, we retrain a VOS model on both labeled
and pseudo-labeled data without any restrictions on the first
frame. For the first time, we present a general way to train
VOS models on two-shot VOS datasets. By using 7.3% and 2.9% labeled data of YouTube-VOS and DAVIS benchmarks,
our approach achieves comparable results in contrast to
the counterparts trained on fully labeled set. Code and
models are available at https://github.com/yk-pku/Two-shot-Video-Object-Segmentation.
| 1. Introduction
Video object segmentation (VOS), also known as mask
tracking, aims to segment the target object in a video given
the annotation of the reference (or first) frame. Existing
approaches [7, 9, 21, 30, 37, 46, 52] are trained on densely
annotated datasets such as DA VIS [33, 34] and YouTube-
VOS [50]. However, acquiring dense annotations, partic-
*Corresponding authors.
Figure 1. (a) Problem formulation: previous works on video object segmentation rely on densely annotated videos, whereas we present two-shot video object segmentation, which merely accesses two labeled frames per video. (b) Comparison among naive 2-shot STCN, STCN trained on the full set, and 2-shot STCN equipped with our approach on DAVIS 2016/2017 and YouTube-VOS 2018/2019.
ularly at the pixel level, is laborious and time-consuming.
For instance, the DA VIS benchmark consists of 60 videos,
each with an average of 70 labeled frames; the YouTube-
VOS dataset has an even larger amount of videos, and every
fifth frame of each video is labeled to lower the annotation
cost. It is necessary to develop data-efficient VOS models
to reduce the dependency on labeled data.
In this work, we investigate the feasibility of training a
satisfactory VOS model on sparsely annotated videos. For
the sake of convenience, we use the term N-shot to denote
thatNframes are annotated per training video. Note that
1-shot is meaningless since it degrades VOS to the task of
image-level segmentation. We use STCN [9] as our base-
line due to its simplicity and popularity. Since at least
two labeled frames per video are required for VOS train-
ing, we follow the common practice to optimize a naive
2-shot STCN model on the combination of YouTube-VOS
and DAVIS, and evaluate on YouTube-VOS 2018/2019 and DAVIS 2016/2017, respectively. We compare the naive 2-
shot STCN with its counterpart trained on full set in Fig. 1b.
Surprisingly, 2-shot STCN still achieves decent results, for
instance, only a −2.1%performance drop is observed on
YouTube-VOS 2019 benchmark, demonstrating the practi-
cality of 2-shot VOS.
So far, the wealth of information present in unlabeled
frames is yet underexplored. In the last decades, semi-
supervised learning, which combines a small amount of
labeled data with a large collection of unlabeled data dur-
ing training, has achieved considerable success on vari-
ous tasks such as image classification [3, 39], object detec-
tion [40, 49] and semantic segmentation [14, 17]. In this
work, we also adopt this learning paradigm to promote 2-
shot VOS (see Fig. 1a). The underlying idea is to generate
credible pseudo labels for unlabeled frames during training
and to optimize the model on the combination of labeled
and pseudo-labeled data. Here we continue to use STCN [9]
as an example to illustrate our design principle, neverthe-
less, our approach is compatible with most VOS models.
Concretely, STCN takes a randomly selected triplet of la-
beled frames as input but the supervisions are only applied
to the last two—VOS requires the annotation of the first
frame as reference to segment the object of interest that ap-
peared in subsequent frames. This motivates us to utilize
the ground-truth for the first frame to avoid error propa-
gation during early training. Each of the last two frames,
nevertheless, can be either a labeled frame or an unlabeled
frame with a high-quality pseudo label. Although the per-
formance is improved with this straightforward paradigm,
the capability of semi-supervised learning is still underex-
plored due to the restriction of employing the ground truth
as the starting frame. We term the process described above
as phase-1.
To take full advantage of unlabeled data, we lift the re-
striction placed on the starting frame, allowing it to be either
a labeled or pseudo-labeled frame. To be specific, we adopt
the VOS model trained in phase-1 to infer the unlabeled
frames for pseudo-labeling. After that, each frame is as-
sociated with a pseudo label that approximates the ground-
truth. The generated pseudo labels are stored in a pseudo-
label bank for the convenience of access. The VOS model
is then retrained without any restrictions—similar to how it is trained through supervised learning, but each frame
has either a ground-truth or a pseudo-label attached to it.
It is worth noting that, as training progresses, the predic-
tions become more precise, yielding more reliable pseudo
labels—we update the pseudo-label bank once we identify
such pseudo labels. The above described process is named
as phase-2. As shown in Fig. 1b, our approach, assembled onto STCN, achieves results comparable to its counterpart, STCN trained on the full set (e.g., 85.2% vs. 85.1% on DAVIS 2017, and 82.7% vs. 82.7% on YouTube-VOS 2019), though our approach merely accesses 7.3% and 2.9% of the labeled data of the YouTube-VOS and DAVIS benchmarks, respectively.
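For concreteness, a condensed sketch of this two-phase paradigm is given below. The callables (train_step with an assumed signature train_step(reference, targets, pseudo_labels=None), predict, confidence), the data layout, and the confidence threshold are hypothetical placeholders for illustration only, not the actual implementation.
```python
import random

def two_phase_training(train_step, predict, confidence, videos, iters, tau=0.9):
    # Phase 1: the reference frame is always a ground-truth labeled frame;
    # the two target frames may be labeled or pseudo-labeled on the fly.
    for _ in range(iters):
        v = random.choice(videos)
        train_step(reference=v["labeled_frames"][0],
                   targets=random.sample(v["all_frames"], 2))

    # Populate a pseudo-label bank with phase-1 predictions on unlabeled frames.
    bank = {(v["id"], f): predict(v["labeled_frames"][0], f)
            for v in videos for f in v["unlabeled_frames"]}

    # Phase 2: lift the restriction on the reference frame; every frame carries
    # either a ground truth or a bank entry, and bank entries are refreshed
    # whenever a sufficiently confident prediction appears.
    for _ in range(iters):
        v = random.choice(videos)
        frames = random.sample(v["all_frames"], 3)
        preds = train_step(reference=frames[0], targets=frames[1:], pseudo_labels=bank)
        for f, mask in zip(frames[1:], preds):
            if f in v["unlabeled_frames"] and confidence(mask) > tau:
                bank[(v["id"], f)] = mask
    return bank
```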
Our contributions can be summarized as follows:
• For the first time, we demonstrate the feasibility
of two-shot video object segmentation: two labeled
frames per video are almost sufficient for training a
decent VOS model, even without the use of unlabeled
data.
• We present a simple yet efficient training paradigm to
exploit the wealth of information present in unlabeled
frames. This novel paradigm can be seamlessly ap-
plied to various VOS models, e.g., STCN [9], RDE-
VOS [21] and XMem [7] in our experiments.
• Though we only access a small amount of labeled data (e.g., 7.3% for YouTube-VOS and 2.9% for DAVIS), our approach still achieves competitive results compared with the counterparts trained on the full set. For example, 2-shot STCN equipped with our approach achieves 85.1%/82.7% on DAVIS 2017/YouTube-VOS 2019, which is +4.1%/+2.1% higher than the naive 2-shot STCN while only 0.1%/0.0% lower than the STCN trained on the full set.
|
Yang_Context_De-Confounded_Emotion_Recognition_CVPR_2023 | Abstract
Context-Aware Emotion Recognition (CAER) is a cru-
cial and challenging task that aims to perceive the emo-
tional states of the target person with contextual informa-
tion. Recent approaches invariably focus on designing so-
phisticated architectures or mechanisms to extract seem-
ingly meaningful representations from subjects and con-
texts. However, a long-overlooked issue is that a con-
text bias in existing datasets leads to a significantly unbal-
anced distribution of emotional states among different con-
text scenarios. Concretely, the harmful bias is a confounder
that misleads existing models to learn spurious correlations
based on conventional likelihood estimation, significantly
limiting the models’ performance. To tackle the issue, this
paper provides a causality-based perspective to disentan-
gle the models from the impact of such bias, and formu-
late the causalities among variables in the CAER task via
a tailored causal graph. Then, we propose a Contextual
Causal Intervention Module (CCIM) based on the backdoor
adjustment to de-confound the confounder and exploit the
true causal effect for model training. CCIM is plug-in and
model-agnostic, which improves diverse state-of-the-art ap-
proaches by considerable margins. Extensive experiments
on three benchmark datasets demonstrate the effectiveness
of our CCIM and the significance of causal insight.
| 1. Introduction
As an essential technology for understanding human in-
tentions, emotion recognition has attracted significant at-
tention in various fields such as human-computer interac-
tion [1], medical monitoring [28], and education [40]. Pre-
vious works have focused on extracting multimodal emo-
tion cues from human subjects, including facial expres-
sions [9, 10, 49], acoustic behaviors [2, 50, 52], and body postures [25, 53], benefiting from advances in deep learning algorithms [6, 7, 21, 26, 27, 43, 44, 46, 47, 54, 55, 59].
Figure 1. Illustration of the context bias in the CAER task. GT means the ground truth. Most images contain similar contexts in the training data with positive emotion categories. In this case, the model learns the spurious correlation between specific contexts and emotion categories and gives wrong results. Thanks to CCIM, the simple baseline [19] achieves more accurate predictions.
Despite the impressive improvements achieved by subject-
centered approaches, their performance is limited by natu-
ral and unconstrained environments. Several examples in
Figure 1 (left) show typical situations on a visual level. In-
stead of well-designed visual contents, multimodal repre-
sentations of subjects in wild-collected images are usually
indistinguishable ( e.g., ambiguous faces or gestures), which
forces us to exploit complementary factors around the sub-
ject that potentially reflect emotions.
Inspired by psychological study [3], recent works [19,22,
23, 29, 56] have suggested that contextual information con-
tributes to effective emotion cues for Context-Aware Emo-
tion Recognition (CAER). The contexts are considered to
include the place category, the place attributes, the objects,
or the actions of others around the subject [20]. The major-
ity of such research typically follows a common pipeline:
(1) Obtaining the unimodal/multimodal representations of
the recognized subject; (2) Building diverse contexts and
extracting emotion-related representations; (3) Designing fusion strategies to combine these features for emotion label predictions.
Figure 2. A toy experiment on the EMOTIC [20] and CAER-S [22] datasets for scene categories of the angry and happy emotions, plotting the number of scene categories against the normalized conditional entropy. More scene categories with zero normalized conditional entropy reveal a strong presence of the context bias.
Although existing methods have improved
modestly through complex module stacking [12,23,51] and
tricks [16, 29], they invariably suffer from a context bias
of the datasets, which has long been overlooked. Recall-
ing the process of generating CAER datasets, different an-
notators were asked to label each image according to what
they subjectively thought people in the images with diverse
contexts were feeling [20]. This protocol makes the prefer-
ence of annotators inevitably affect the distribution of emo-
tion categories across contexts, thereby leading to the con-
text bias. Figure 1 illustrates how such bias confounds the
predictions. Intriguingly, most of the images in the training data
contain vegetated scenes with positive emotion categories,
while negative emotions in similar contexts are almost non-
existent. Therefore, the baseline [19] is potentially misled
into learning the spurious dependencies between context-
specific features and label semantics. When given test im-
ages with similar contexts but negative emotion categories,
the model inevitably infers the wrong emotional states.
More intriguingly, a toy experiment is performed to ver-
ify the strong bias in CAER datasets. This test aims to ob-
serve how well emotions correlate with contexts ( e.g., scene
categories). Specifically, we employ the ResNet-152 [15]
pre-trained on Places365 [58] to predict scene categories
from images with three common emotion categories ( i.e.,
“anger”, “happy”, and “fear”) across two datasets. The top
200 most frequent scenes from each emotion category are
selected, and the normalized conditional entropy of each
scene category across the positive and negative set of a spe-
cific emotion is computed [30]. While analyzing correla-
tions between scene contexts and emotion categories in Fig-
ure 2 ( e.g., “anger” and “happy”), we find that more scene
categories with the zero conditional entropy are most likely
to suggest the significant context bias in the datasets, as it shows the presence of these scenes only in the positive
or negative set of emotions. Concretely, for the EMOTIC
dataset [20], about 40% of scene categories for anger have
zero conditional entropy while about 45% of categories for
happy ( i.e., happiness) have zero conditional entropy. As
an intuitive example, most party-related scene contexts are
present in the samples with the happy category and almost
non-existent in the negative categories. These observations
confirm the severe context bias in CAER datasets, leading
to distribution gaps in emotion categories across contexts
and uneven visual representations .
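A minimal sketch of the normalized conditional entropy used in this probe is shown below; the counts are made up for illustration, and the paper's exact computation may differ in detail.
```python
import numpy as np

def normalized_conditional_entropy(pos_count, neg_count):
    # For one scene category: how evenly is it split between the positive and
    # negative set of an emotion? 0 means it occurs on one side only (strong
    # bias), 1 means a perfectly even split.
    p = np.array([pos_count, neg_count], dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())  # log base 2 over two outcomes -> already in [0, 1]

print(normalized_conditional_entropy(120, 0))   # 0.0 -> scene appears only with one polarity
print(normalized_conditional_entropy(60, 60))   # 1.0 -> scene is uninformative about the emotion
```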
Motivated by the above observation, we attempt to em-
brace causal inference [31] to reveal the culprit that poi-
sons the CAER models, rather than focusing on beating
them. As a revolutionary scientific paradigm that facili-
tates models toward unbiased prediction, the most impor-
tant challenge in applying classical causal inference to the
modern CAER task is how to reasonably depict true causal
effects and identify the task-specific dataset bias. To this
end, this paper attempts to address the challenge and rescue
the bias-ridden models by drawing on human instincts, i.e.,
looking for the causality behind any association. Specifi-
cally, we present a causality-based bias mitigation strategy.
We first formulate the procedure of the CAER task via a
proposed causal graph. In this case, the harmful context
bias in datasets is essentially an unintended confounder
that misleads the models to learn the spurious correlation
between similar contexts and specific emotion semantics.
From Figure 3, we disentangle the causalities among the
input images X, subject features S, context features C,
confounder Z, and predictions Y. Then, we propose a
simple yet effective Contextual Causal Intervention Module
(CCIM) to achieve context-deconfounded training and use
the do-calculus P(Y|do(X)) to calculate the true causal ef-
fect, which is fundamentally different from the conventional
likelihood P(Y|X). CCIM is plug-in and model-agnostic,
with the backdoor adjustment [14] to de-confound the con-
founder and eliminate the impact of the context bias. We
comprehensively evaluate the effectiveness and superiority
of CCIM on three standard and biased CAER datasets. Nu-
merous experiments and analyses demonstrate that CCIM
can significantly and consistently improve existing base-
lines, achieving a new state-of-the-art (SOTA).
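The backdoor adjustment amounts to P(Y|do(X)) = sum_z P(Y|X, z)P(z). Below is a common feature-level approximation of this intervention using a fixed dictionary of context prototypes. It is a hedged sketch of the general idea, not necessarily the authors' exact CCIM; the module structure, attention form, and uniform prior are assumptions.
```python
import torch
import torch.nn as nn

class ContextInterventionSketch(nn.Module):
    def __init__(self, dim, confounders, prior=None):
        super().__init__()
        self.register_buffer("z", confounders)                 # (K, dim) context prototypes
        k = confounders.size(0)
        self.register_buffer("prior", torch.full((k,), 1.0 / k) if prior is None else prior)
        self.q, self.k = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, x):                                       # x: (B, dim) fused features
        attn = torch.softmax(self.q(x) @ self.k(self.z).t() / x.size(-1) ** 0.5, dim=-1)
        z_hat = (attn * self.prior) @ self.z                    # approximate expectation over z
        return x + z_hat                                        # de-confounded feature for the classifier

ccim = ContextInterventionSketch(dim=256, confounders=torch.randn(32, 256))
deconfounded = ccim(torch.randn(8, 256))
```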
The main contributions can be summarized as follows:
• To our best knowledge, we are the first to investigate
the adverse context bias of the datasets in the CAER
task from the causal inference perspective and iden-
tify that such bias is a confounder, which misleads the
models to learn the spurious correlation.
• We propose CCIM, a plug-in contextual causal inter-
vention module, which could be inserted into most
CAER models to remove the side effect caused by the
19006
confounder and facilitate a fair contribution of diverse
contexts to emotion understanding.
• Extensive experiments on three standard CAER
datasets show that the proposed CCIM can facilitate
existing models to achieve unbiased predictions.
|
Yang_Towards_Effective_Adversarial_Textured_3D_Meshes_on_Physical_Face_Recognition_CVPR_2023 | Abstract
Face recognition is a prevailing authentication solution
in numerous biometric applications. Physical adversarial
attacks, as an important surrogate, can identify the weak-
nesses of face recognition systems and evaluate their ro-
bustness before deployed. However, most existing physical
attacks are either detectable readily or ineffective against
commercial recognition systems. The goal of this work is to
develop a more reliable technique that can carry out an end-
to-end evaluation of adversarial robustness for commercial
systems. It requires that this technique can simultaneously
deceive black-box recognition models and evade defensive
mechanisms. To fulfill this, we design adversarial textured
3D meshes ( AT3D ) with an elaborate topology on a human
face, which can be 3D-printed and pasted on the attacker’s
face to evade the defenses. However, the mesh-based op-
timization regime calculates gradients in high-dimensional
mesh space, and can be trapped into local optima with un-
satisfactory transferability. To deviate from the mesh-based
space, we propose to perturb the low-dimensional coeffi-
cient space based on 3D Morphable Model, which signifi-
cantly improves black-box transferability meanwhile enjoy-
ing faster search efficiency and better visual quality. Exten-
sive experiments in digital and physical scenarios show that
our method effectively explores the security vulnerabilities
of multiple popular commercial services, including three
recognition APIs, four anti-spoofing APIs, two prevailing mobile phones, and two automated access control systems.
| 1. Introduction
Face recognition has become a prevailing authentication
solution in biometric applications, ranging from financial
payment to automated surveillance systems.
Figure 1. Demonstration of physical black-box attacks for unlocking one prevailing mobile phone. The attacker wearing the 3D-printed adversarial mesh can successfully mislead the face recognition model to be recognized as the victim, meanwhile evading face anti-spoofing. More results are shown in Sec. 4.
Despite its blooming development [4, 26, 33], recent research in adver-
sarial machine learning has revealed that face recognition
models based on deep neural networks are highly vulnera-
ble to adversarial examples [10,41], leading to serious con-
sequences or security problems in real-world applications.
Due to the imperative need of evaluating model robust-
ness [30, 45], extensive attempts have been devoted to ad-
versarial attacks on face recognition models. Adversarial at-
tacks in the digital world [8,28,39,45] are characterized by
adding minimal perturbations to face images in the digital
space, aiming to evade being recognized or to impersonate
another identity. Since an adversary usually cannot access
the digital input of practical systems, physical adversarial
examples wearable for real human faces are more feasible
for evaluating their adversarial robustness. Some studies
have shown the success of physical attacks against popular
recognition models by adopting different attack types, such
as eyeglass frames [27, 28], hats [17] and stickers [29].
In spite of the remarkable progress, it is challenging
to launch practical andeffective physical attack methods
on automatic face recognition systems. First, the defen-
Table 1. A comparison among different methods regarding the use of 3D attack types, commercial face recognition models, commercial defenses, and the number of physical evaluations. "Partially" indicates that the method involved some geometric transformations to make a 2D patch approximately approach a realistic 3D patch.
                          Frames [28]  AdvHat [17]  FaceAdv [29]  PadvFace [50]  AdvMask [52]  Face3DAdv [40]  RHDE [35]  Ours
3D attack types           No           Partially    Partially     No             Yes           Yes             Partially  Yes
Commercial recognition    Yes          No           No            No             No            No              Yes        Yes
Commercial defenses       No           No           No            No             No            Yes             No         Yes
Number of physical tests  10           3            10            10             30            10              3          50
sive mechanism [14, 42, 43, 46, 48] on face recognition, i.e.,
face anti-spoofing, has achieved impressive performance
among the academic and industry communities. Some pop-
ular defenses [18, 34, 49] have injected more sensors (such
as depth, multi-spectral and infrared cameras) to provide
more effective defenses. However, most of the physical at-
tacks have not evaluated the passing rates against practi-
cal defensive mechanisms, as reported in Table 1. Second,
these methods cannot perform satisfactorily for imperson-
ation attacks against diverse commercial black-box recog-
nition models due to the limited black-box transferability.
The goal of this work is to develop practical andeffective
physical adversarial attacks that can simultaneously deceive
black-box recognition models and evade defensive mecha-
nisms in commercial face recognition systems, e.g., unlock-
ing mobile phones, as demonstrated in Fig. 1.
Evading the defensive mechanisms. Recent research
has found that high-fidelity 3D masks [19, 21] can better
fool the prevailing face anti-spoofing methods by 3D print-
ing techniques. It becomes an appealing and feasible way to
apply a 3D adversarial mask for evading defensive mecha-
nisms in face recognition systems. To achieve this goal, we
first design adversarial textured 3D meshes ( AT3D ) with
an elaborate topology on a human face, which can be us-
able by standard graphics software such as Blender [9]
and Maya [22]. As a primary 3D representation, textured
meshes can be immediately 3D-printed and pasted on real
faces for physical adversarial attacks, which have geometric
details, complex topology and high-quality textures. Exper-
imentally, AT3D can be more conducive to steadily passing
commercial face anti-spoofing services, such as FaceID and
Tencent anti-spoofing APIs, two mobile phones and two ac-
cess control systems with multiple sensors .
Misleading the black-box recognition models. The
typical 3D mesh attacks [23, 36, 47] propose to optimize
adversarial examples in mesh representation space. Thus,
high complexity is virtually inevitable for calculating gradi-
ents in such high-dimensional search space due to the thou-
sands of triangle faces on each human face. The procedures
are also costly and prone to overfitting [20]
with unsatisfactory transferability. Therefore, we aim to
perform the optimization trajectory in a low-dimensional
manifold as a regularization aiming for escaping from over-
fitting. The low-dimensional manifold should possess a
sufficient capacity that encodes any 3D face in this low-dimensional feature space, thus successfully achieving the
white-box adversarial attack against a substitute model. A
principled way of spanning such a subspace is considered
by leveraging 3D Morphable Model (3DMM) [31] that ef-
fectively achieves dimensionality reduction of any high-
dimensional mesh data. Based on this, we are capable
of generating an adversarial mesh by perturbing the low-
dimensional coefficients of 3DMM, making it constrained
on the data manifold of realistic 3D faces. Therefore, the
crafted mesh can obtain a strong semantic feature of a
3D face, which can achieve well-generalizing performance
among the white-box and black-box models due to knowl-
edgable semantic pattern characteristics [37, 38, 44]. In ad-
dition, low-dimensional optimization can also avoid self-
intersection and flying vertices problems in mesh-based op-
timization [47], resulting in better visual appearance.
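A rough sketch of optimizing in the low-dimensional coefficient space is given below. The renderer (render_fn), face recognition model (fr_model), loss, step count, and clamping threshold are caller-supplied stand-ins for illustration; the actual AT3D pipeline, including its transferability-oriented objective, differs in detail.
```python
import torch

def attack_3dmm_coefficients(coeff, basis, mean_shape, render_fn, fr_model,
                             victim_feat, steps=50, lr=0.01, eps=0.5):
    # coeff: (d,) 3DMM coefficients; basis: (3N, d); mean_shape: (3N,)
    delta = torch.zeros_like(coeff, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        vertices = mean_shape + basis @ (coeff + delta)   # decode a mesh from coefficients
        face_img = render_fn(vertices)                    # differentiable rendering assumed
        sim = torch.cosine_similarity(fr_model(face_img), victim_feat, dim=-1)
        loss = -sim.mean()                                # impersonation: maximize similarity
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                       # stay close to the realistic face manifold
    return (coeff + delta).detach()
```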
Experimentally, we have effectively explored the secu-
rity vulnerabilities of multiple popular commercial services,
including 1) recognition APIs—Amazon, Face++, and Ten-
cent; 2) anti-spoofing APIs—FaceID, SenseID, Tencent,
and Aliyun; 3) two prevailing mobile phones and two auto-
mated access control systems that incorporate multiple sen-
sors. Our main contributions can be summarized as:
• We propose effective and practical adversarial textured
3D meshes with elaborate topology and effective opti-
mization, which can simultaneously evade black-box
recognition models and defensive mechanisms.
• Extensive physical experiments demonstrate that our
method can consistently mislead multiple commer-
cial systems, including unlocking prevailing mobile
phones and automated access control systems.
• We present a reliable technique to evaluate the robust-
ness of face recognition systems, which can be further
leveraged as an effective data augmentation strategy to
improve defensive ability.
|
Yu_ANetQA_A_Large-Scale_Benchmark_for_Fine-Grained_Compositional_Reasoning_Over_Untrimmed_CVPR_2023 | Abstract
Building benchmarks to systemically analyze different
capabilities of video question answering (VideoQA) models
is challenging yet crucial. Existing benchmarks often
use non-compositional simple questions and suffer from
language biases, making it difficult to diagnose model
weaknesses incisively. A recent benchmark AGQA [8] poses
a promising paradigm to generate QA pairs automatically
from pre-annotated scene graphs, enabling it to measure
diverse reasoning abilities with granular control. However,
its questions have limitations in reasoning about the fine-
grained semantics in videos as such information is absent
in its scene graphs. To this end, we present ANetQA, a
large-scale benchmark that supports fine-grained compo-
sitional reasoning over the challenging untrimmed videos
from ActivityNet [4]. Similar to AGQA, the QA pairs
in ANetQA are automatically generated from annotated
video scene graphs. The fine-grained properties of ANetQA
are reflected in the following: (i) untrimmed videos with
fine-grained semantics; (ii) spatio-temporal scene graphs
with fine-grained taxonomies; and (iii) diverse questions
generated from fine-grained templates. ANetQA attains 1.4
billion unbalanced and 13.4 million balanced QA pairs,
which is an order of magnitude larger than AGQA with
a similar number of videos. Comprehensive experiments
are performed for state-of-the-art methods. The best model
achieves 44.5% accuracy while human performance tops
out at 84.5%, leaving sufficient room for improvement.
| 1. Introduction
Figure 1. Comparisons of ANetQA and AGQA [8]. The QA pairs in both benchmarks are automatically generated from spatio-temporal scene graphs by using handcrafted question templates. Benefiting from the untrimmed long videos and fine-grained scene graphs, our questions require more fine-grained reasoning abilities than those in AGQA when similar templates are applied. Moreover, the newly introduced attribute annotations allow us to design many fine-grained question templates that are not supported in AGQA (e.g., "what color" and "what is the occupation").
Recent advances in deep learning have enabled machines to tackle complicated video-language tasks that involve
both video and language clues, e.g., video-text retrieval,
video captioning, video temporal grounding, and video
question answering. Among these tasks, video question
answering (VideoQA) is one of the most challenging tasks
as it verifies multiple skills simultaneously. Taking the
question “ What is the black object that the person is
wearing before various fish are seen swimming through the
reef? ” in Figure 1 as an example, it requires a synergistic
understanding of both the video and question, together with
spatio-temporal reasoning to predict an accurate answer.
To comprehensively evaluate the capabilities of existing
VideoQA models, several prominent benchmarks have been
established [11, 21, 29, 33, 38, 42, 43]. Despite their useful-
ness, they also have distinct shortcomings. Some bench-
marks use simulated environments to synthesize video con-
tents [29, 42], which provides controllable diagnostics over
different reasoning skills. However, the synthetic videos
lack visual diversity and the learned models on the bench-
marks cannot generalize to real-world scenarios directly.
Some real-world benchmarks generate QA pairs from off-
the-shelf video captions [38, 48] or human annotations [11,
21, 33, 43], which suffer from simple question expressions
and biased answer distributions. These weaknesses may be
exploited by models to make educated guesses to obtain the
correct answers without seeing video contents [24, 40].
One recent VideoQA benchmark AGQA poses a promis-
ing paradigm to address the above limitations [8]. AGQA
is built upon the real-world videos from Charades [32].
In contrast to previous benchmarks, AGQA adopts a two-
stage paradigm. For each video, a spatio-temporal
scene graph over representative frames is first annotated
by humans, which consists of spatially-grounded object-
relationship triplets and temporally-grounded actions. After
that, different types of questions are generated on top of
the scene graph using corresponding question templates,
enabling it to measure various reasoning abilities with
granular control. Despite the comprehensiveness of AGQA,
we argue that its foundation—the spatio-temporal scene
graph—has limitations in representing the fine-grained se-
mantics of videos. Specifically, their scene graphs encode
objects and relationships from limited taxonomies, which
are not fine-grained enough for generating questions that
require reasoning about the detailed video semantics.
To this end, we introduce ANetQA, a new benchmark that supports fine-grained compositional reasoning over complex web videos from ActivityNet [4]. (Note that there is an existing VideoQA benchmark, ActivityNet-QA [43], whose QA pairs are fully annotated by humans; to avoid confusion, we name our benchmark ANetQA.) Similar to the
strategy of AGQA, the QA pairs in ANetQA are automati-
cally generated from pre-annotated scene graphs. As shown
in Figure 1, we claim that ANetQA is more fine-grained
than AGQA in terms of the following:
(i) The benchmark is built upon untrimmed long videos
with fine-grained semantics. Each video may involve
multiple indoor or outdoor scenarios, containing com-
plicated interactions between persons and objects.
(ii) The spatio-temporal scene graph consists of fine-
grained objects ( e.g., “manta ray ”, “diving gear ”),
relationships ( e.g., “jumping into ”, “chasing ”), at-
tributes ( e.g., “swimming ”, “black and white ”), and
actions in natural language ( e.g., “a manta ray swims
in the ocean over a reef ”).
(iii) Benefiting from the fine-grained scene graphs, we are able to design diverse question templates that require fine-grained compositional reasoning (e.g., "what color ..." and "what is the occupation ..."), as sketched below.
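The following toy example illustrates the template-filling idea; the scene graph, field names, and templates are invented for this sketch and are far simpler than ANetQA's actual annotation schema.
```python
scene_graph = {
    "objects": {
        "sea_turtle": {"attributes": {"color": "green", "state": "swimming"}},
        "person":     {"attributes": {"occupation": "diver"}},
    },
    "relationships": [("person", "wearing", "diving_gear")],
}

def fill_template(template, answer_field, obj_name, graph):
    obj = graph["objects"][obj_name]
    question = template.format(object=obj_name.replace("_", " "),
                               state=obj["attributes"].get("state", ""))
    return question, obj["attributes"][answer_field]

print(fill_template("What color is the {state} {object}?", "color", "sea_turtle", scene_graph))
# ('What color is the swimming sea turtle?', 'green')
print(fill_template("What is the occupation of the {object}?", "occupation", "person", scene_graph))
# ('What is the occupation of the person?', 'diver')
```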
Benefiting from the above fine-grained characteristics,
ANetQA obtains 1.4B unbalanced and 13.4M balanced QA
pairs. To the best of our knowledge, ANetQA is the largest
VideoQA benchmark in terms of the number of questions.
Compared with the previous largest benchmark AGQA,
ANetQA is an order of magnitude larger than it with a
similar number of videos. We conduct comprehensive
experiments and intensive analyses on ANetQA for the
state-of-the-art VideoQA models, including HCRN [19],
ClipBERT [20], and All-in-One [35]. The best model
delivers 44.5% accuracy while human performance tops out
at 84.5%, showing sufficient room for future improvement.
The benchmark is available at here2.
|
Zhang_Dimensionality-Varying_Diffusion_Process_CVPR_2023 | Abstract
Diffusion models, which learn to reverse a signal de-
struction process to generate new data, typically require
the signal at each step to have the same dimension. We
argue that, considering the spatial redundancy in image
signals, there is no need to maintain a high dimension-
ality in the evolution process, especially in the early
generation phase. To this end, we make a theoretical
generalization of the forward diffusion process via signal
decomposition. Concretely, we manage to decompose an
image into multiple orthogonal components and control
the attenuation of each component when perturbing the
image. That way, along with the noise strength increasing,
we are able to diminish those inconsequential components
and thus use a lower-dimensional signal to represent the
source, barely losing information. Such a reformulation
allows to vary dimensions in both training and inference
of diffusion models. Extensive experiments on a range of
datasets suggest that our approach substantially reduces
the computational cost and achieves on-par or even better
synthesis performance compared to baseline methods. We
also show that our strategy facilitates high-resolution image
synthesis and improves FID of diffusion model trained on
FFHQ at 1024×1024 resolution from 52.40 to 10.46. Code
is available at https://github.com/damo-vilab/dvdp.
| 1. Introduction
Diffusion models [2, 6, 9, 15, 21, 24, 28] have recently
shown great potential in image synthesis. Instead of directly
learning the observed distribution, it constructs a multi-
step forward process through gradually adding noise onto
the real data ( i.e., diffusion). After a sufficiently large
number of steps, the source signal could be considered
completely destroyed, resulting in a pure noise distribution
that naturally supports sampling. In this way, starting from
sampled noises, we can expect new instances after reversing
the diffusion process step by step.
As can be seen, the above pipeline does not change
the dimension of the source signal throughout the entire
diffusion process [6,26,28]. It thus requires the reverse pro-
cess to map a high-dimensional input to a high-dimensional
output at every single step, causing heavy computation
overheads [10, 22]. However, images present a measure of
spatial redundancy [4] from the semantic perspective ( e.g.,
an image pixel could usually be easily predicted according
to its neighbours). Given such a fact, when the source
signal is attenuated to some extent along with the noise
strength growing, it should be possible to get replaced by
a lower-dimensional signal. We therefore argue that there
is no need to follow the source signal dimension along
the entire distribution evolution process, especially at early
steps ( i.e., steps close to the pure noise distribution) for
coarse generation.
Figure 2. Conceptual comparison between DDPM [6] and our proposed DVDP, where our approach allows using a varying dimension in
the diffusion process.
In this work, we propose dimensionality-varying diffu-
sion process (DVDP), which allows dynamically adjusting
the signal dimension when constructing the forward path.
The varying dimensionality concept is shown in Fig. 2. For
this purpose, we first decompose an image into multiple
orthogonal components, each of which owns dimension
lower than the original data. Then, based on such a
decomposition, we theoretically generalize the conventional
diffusion process such that we can control the attenuation of
each component when adding noise. Thanks to this refor-
mulation, we manage to drop those inconsequential com-
ponents after the noise strength reaches a certain level, and
thus represent the source image using a lower-dimensional
signal with little information lost. The remaining diffusion
process could inherit this dimension and apply the same
technique to further reduce the dimension.
We evaluate our approach on various datasets, including
objects, human faces, animals, indoor scenes, and outdoor
scenes. Experimental results suggest that DVDP achieves
on-par or even better synthesis performance than baseline
models on all datasets. More importantly, DVDP relies on
much fewer computations, and hence speeds up both train-
ing and inference of diffusion models. We also demonstrate
the effectiveness of our approach in learning from high-
resolution data. For example, we are able to start from a
64×64noise to produce an image under 1024×1024 resolu-
tion. With FID [5] as the evaluation metric, our 1024×1024
model trained on FFHQ improves the baseline [28] from
52.40 to 10.46. All these advantages benefit from using a
lower-dimensional signal, which reduces the computational
cost and mitigates the optimization difficulty.
|
Zhang_Regularized_Vector_Quantization_for_Tokenized_Image_Synthesis_CVPR_2023 | Abstract
Quantizing images into discrete representations has been
a fundamental problem in unified generative modeling.
Predominant approaches learn the discrete representation
either in a deterministic manner by selecting the best-
matching token or in a stochastic manner by sampling from
a predicted distribution. However, deterministic quantiza-
tion suffers from severe codebook collapse and misalign-
ment with inference stage while stochastic quantization suf-
fers from low codebook utilization and perturbed recon-
struction objective. This paper presents a regularized vec-
tor quantization framework that allows to mitigate above
issues effectively by applying regularization from two per-
spectives. The first is a prior distribution regularization
which measures the discrepancy between a prior token dis-
tribution and the predicted token distribution to avoid code-
book collapse and low codebook utilization. The second is
a stochastic mask regularization that introduces stochastic-
ity during quantization to strike a good balance between in-
ference stage misalignment and unperturbed reconstruction
objective. In addition, we design a probabilistic contrastive
loss which serves as a calibrated metric to further miti-
gate the perturbed reconstruction objective. Extensive ex-
periments show that the proposed quantization framework
outperforms prevailing vector quantization methods con-
sistently across different generative models including auto-
regressive models and diffusion models.
| 1. Introduction
With the prevalence of multi-modal image synthesis
[3, 23, 37, 39] and Transformers [31], unifying data mod-
eling regardless of data modalities has attracted increas-
ing interest from the research communities. Aiming for a
generic data representation across different data modalities,
discrete representation learning [21, 25] plays a significant
role in the unified modeling. In particular, vector quantiza-
tion models (e.g., VQ-VAE [21] and VQ-GAN [8]) emerge as a promising family for learning generic image representations by discretizing images into discrete tokens.
Figure 1. Visualization of the codebook (first row) and illustration of codebook utilization (second row) on the ADE20K dataset [42]. VQ-GAN [8] severely suffers from codebook collapse as most codebook embeddings are invalid values. Gumbel-VQ [2] learns valid values for all codebook embeddings, while only a small number of embeddings are actually used for quantization, as illustrated in the codebook utilization row. In comparison, the proposed regularized quantization prevents codebook collapse and achieves full codebook utilization. The codebook visualization method is provided in the supplementary file.
With the
tokenized representation, generative models such as auto-
regressive model [8, 9] and diffusion model [6, 12] can be
applied to accommodate the dependency of the sequential
tokens for image generation, which is referred to as tokenized image synthesis in this context.
Vector quantization models can be broadly grouped into
deterministic quantization and stochastic quantization ac-
cording to the selection of discrete tokens. Specifically,
typical deterministic methods like VQ-GAN [8] directly se-
lect the best-matching token via Argmin or Argmax, while
stochastic methods like Gumbel-VQ [2] select a token by
stochastically sampling from a predicted token distribution.
On the other hand, deterministic quantization suffers from
codebook collapse [26], a well-known problem where large
portion of codebook embeddings are invalid with near-zero
values as shown in Fig. 1 (first row). In addition, determin-
istic quantization is misaligned with the inference stage of
generative modeling, where the tokens are usually randomly
sampled instead of selecting the best matching one. In-
stead, stochastic quantization samples tokens according to a
predicted token distribution with Gumbel-Softmax [2, 14],
which helps avoid codebook collapse and mitigate infer-
ence misalignment. However, although most codebook em-
beddings are valid values in stochastic quantization, only
a small part is actually utilized for vector quantization as
shown in Fig. 1 (second row), which is dubbed as low code-
book utilization. Besides, as stochastic methods randomly
sample tokens from a distribution, the image reconstructed
from the sampled tokens is usually not well aligned with
the original image, leading to perturbed reconstruction ob-
jective and unauthentic image reconstruction.
In this work, we introduce a regularized quantization
framework that prevents the above problems effec-
tively via regularization from two perspectives. Specifi-
cally, to avoid codebook collapse and low codebook uti-
lization where only a small number of codebook embed-
dings are valid or used for quantization, we introduce a
prior distribution regularization by assuming a uniform
distribution as the prior for token distribution. As the pos-
terior token distribution can be approximated by the quanti-
zation results, we can measure the discrepancy between the
prior token distribution and posterior token distribution. By
minimizing the discrepancy during training, the quantiza-
tion process is regularized to use all the codebook embed-
dings, which prevents the predicted token distribution from
collapse into a small number of codebook embeddings.
As deterministic quantization suffers from inference
stage misalignment and stochastic quantization suffers from
perturbed reconstruction objective, we introduce a stochas-
tic mask regularization to strike a good balance between
them. Specifically, the stochastic mask regularization ran-
domly masks certain ratio of regions for stochastic quanti-
zation, while leaving the unmasked regions for determinis-
tic quantization. This introduces uncertainty for the selec-
tion of tokens and results of quantization, which narrows the
gap with the inference stage of generative modelling where
tokens are selected randomly. We also conduct thorough
and comprehensive experiments to analyze the selection of
masking ratio for optimal image reconstruction and genera-
tion.
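The sketch below illustrates the masking idea on top of a generic codebook: a random subset of regions is quantized stochastically via Gumbel-Softmax while the rest uses the deterministic best-matching entry. It omits straight-through gradients and other training details, and the masking ratio and temperature are arbitrary choices here rather than the paper's settings.
```python
import torch
import torch.nn.functional as F

def mixed_quantization(z, codebook, mask_ratio=0.3, tau=1.0):
    # z: (N, D) encoder features; codebook: (K, D) embeddings
    logits = -torch.cdist(z, codebook)                       # similarity = negative distance
    det_q = codebook[logits.argmax(dim=-1)]                  # deterministic: best-matching token
    sto_q = F.gumbel_softmax(logits, tau=tau, hard=True) @ codebook  # stochastic: sampled token
    mask = (torch.rand(z.size(0), 1) < mask_ratio).float()   # 1 = stochastically quantized region
    return mask * sto_q + (1 - mask) * det_q

quantized = mixed_quantization(torch.randn(16, 64), torch.randn(512, 64))
```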
On the other hand, with the randomly sampled tokens,
the stochastically quantized region will suffer from per-
turbed reconstruction objective. The perturbed reconstruc-
tion objective mainly results from the target for perfect re-
construction of the original image from randomly sampled
tokens. Instead of naively enforcing a perfect image re-
construction with L1 loss, we introduce a contrastive loss
for elastic image reconstruction, which mitigates the per-
turbed reconstruction objective significantly. Similar to
PatchNCE [22, 41], the contrastive loss treats the patch at
the same spatial location as positive pairs and others as neg-ative pairs. By pushing the positive pairs closer and pulling
negative pairs away, the elastic image reconstruction can be
achieved. Another issue with the randomly sampled tokens
is that they tend to introduce perturbation of different scales
in the reconstruction objective, We thus introduce a Proba-
bilistic Contrastive Loss (PCL) that adjusts the pulling force
of different regions according to the discrepancy between
the sampled token embedding and the best-matching token
embedding.
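A compact PatchNCE-style sketch of such a weighted contrastive objective is shown below; the per-patch weights stand in for the discrepancy-based adjustment described above, and the temperature and weighting scheme are illustrative assumptions rather than the paper's exact formulation.
```python
import torch
import torch.nn.functional as F

def probabilistic_contrastive_loss(feat_rec, feat_orig, weights, temperature=0.07):
    # feat_rec, feat_orig: (N, D) patch features of the reconstructed / original image
    # weights: (N,) per-patch weights, e.g. smaller where the sampled token deviates
    # more from the best-matching one
    f_r = F.normalize(feat_rec, dim=-1)
    f_o = F.normalize(feat_orig, dim=-1)
    logits = f_r @ f_o.t() / temperature                 # (N, N); diagonal entries are positives
    per_patch = F.cross_entropy(logits, torch.arange(logits.size(0)), reduction="none")
    return (weights * per_patch).mean()

loss = probabilistic_contrastive_loss(torch.randn(64, 128), torch.randn(64, 128), torch.rand(64))
```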
The contributions of this work can be summarized in
three aspects. First, we present a regularized quantization
framework that introduces a prior distribution regulariza-
tion to prevent codebook collapse and low codebook utiliza-
tion. Second, we propose a stochastic mask regularization
which mitigates the misalignment with the inference stage
of generative modelling. Third, we design a probabilistic
contrastive loss that achieves elastic image reconstruction
and mitigates the perturbed objective adaptively for differ-
ent regions with stochastic quantization.
|
Zhang_PointDistiller_Structured_Knowledge_Distillation_Towards_Efficient_and_Compact_3D_Detection_CVPR_2023 | Abstract
The remarkable breakthroughs in point cloud represen-
tation learning have boosted their usage in real-world ap-
plications such as self-driving cars and virtual reality. How-
ever, these applications usually have a strict requirement
for not only accurate but also efficient 3D object detec-
tion. Recently, knowledge distillation has been proposed
as an effective model compression technique, which trans-
fers the knowledge from an over-parameterized teacher to
a lightweight student and achieves consistent effectiveness
in 2D vision. However, due to point clouds’ sparsity and
irregularity, directly applying previous image-based knowl-
edge distillation methods to point cloud detectors usually
leads to unsatisfactory performance. To fill the gap, this
paper proposes PointDistiller, a structured knowledge dis-
tillation framework for point clouds-based 3D detection.
Concretely, PointDistiller includes local distillation, which extracts and distills the local geometric structure of point clouds with dynamic graph convolution, and a reweighted learning strategy, which highlights student learning on the
crucial points or voxels to improve knowledge distillation
efficiency. Extensive experiments on both voxels-based and
raw points-based detectors have demonstrated the effective-
ness of our method over seven previous knowledge distilla-
tion methods. For instance, our 4 ×compressed PointPillars
student achieves 2.8 and 3.4 mAP improvements on BEV
and 3D object detection, outperforming its teacher by 0.9
and 1.8 mAP , respectively. Codes are available in https:
//github.com/RunpeiDong/PointDistiller .
| 1. Introduction
The growth in large-scale lidar datasets [14] and the
achievements in end-to-end 3D representation learning [46,
47] have boosted the developments of point cloud based seg-
mentation, generation, and detection [25, 48]. As one of the
essential tasks of 3D computer vision, 3D object detection plays a fundamental role in real-world applications such as autonomous driving cars [3, 6, 14] and virtual reality [43].
Figure 1. Experimental results (mAP of moderate difficulty) of our methods on 4x, 8x, and 16x compressed students on KITTI, for BEV and 3D detection with PointPillars, SECOND, and PointRCNN. The area of the dashed lines indicates the benefits of knowledge distillation.
between cumbersome 3D detectors that achieve state-of-the-
art performance and lightweight 3D detectors which are
affordable in real-time applications on edge devices. To ad-
dress this problem, sufficient model compression techniques
have been proposed, such as network pruning [18,35,37,73],
quantization [8,12,40], lightweight model design [21,38,51],
and knowledge distillation [20].
Knowledge distillation, which aims to improve the per-
formance of a lightweight student model by training it to
mimic a pre-trained and over-parameterized teacher model,
has evolved into one of the most popular and effective model
compression methods in both computer vision and natu-
ral language processing [20, 50, 52, 66]. Sufficient theoret-
ical and empirical results have demonstrated its effective-
ness in image-based visual tasks such as image classifica-
tion [20, 50], semantic segmentation [33] and object detec-
tion [1, 5, 28, 71]. However, compared with images, point
clouds have their properties: (i) Point clouds inherently lack
topological information, which makes recovering the local
topology information crucial for the visual tasks [26, 39, 65].
(ii) Different from images that have a regular structure, point
clouds are irregularly and sparsely distributed in the metric
space [13, 15].
These differences between images and point clouds have
hindered the image-based knowledge distillation methods
from achieving satisfactory performance on point clouds and
also raised the requirement to design specific knowledge dis-
tillation methods for point clouds. Recently, a few methods
have been proposed to apply knowledge distillation to 3D
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
21791
㊫࡛ 1 23 45 6 7 8 9 10 11 12 13 14 14 15 16 17+68 %
16 %
7%
3%
Number of Points in the Voxel Ratio of Voxels 25 %50 %75 %Figure 2. Distribution of the voxels with different number of points
inside them. V oxels with no points are not included here.
detection [17, 53]. However, most of these methods focus
on the choice of student-teacher in a multi-modal setting,
e.g., teaching point clouds-based student detectors with an
images-based teacher or vice versa, and still ignore the pe-
culiar properties of point clouds. To address this problem,
we propose a structured knowledge distillation framework
named PointDistiller, which involves local distillation to
distill teacher knowledge in the local geometric structure of
point clouds, and a reweighted learning strategy to handle the sparsity of point clouds by highlighting student learning on the
relatively more crucial voxels or points.
Local Distillation Sufficient recent studies show that captur-
ing and making use of the semantic information in the local
geometric structure of point clouds have a crucial impact on
point cloud representation learning [47, 64]. Hence, instead
of directly distilling the backbone feature of teacher detec-
tors to student detectors, we propose local distillation, which
firstly clusters the local neighboring voxels or points with
KNN (K-Nearest Neighbours), then encodes the semantic
information in local geometric structure with dynamic graph
convolutional layers [64], and finally distill them from teach-
ers to students. Hence, the student detectors can inherit the
teacher’s ability to understand point clouds’ local geometric
information and achieve better detection performance.
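A rough sketch of the local distillation idea is shown below. It uses a parameter-free EdgeConv-style graph feature purely for illustration (the paper uses learnable dynamic graph convolution layers), assumes student and teacher features share the same channel dimension, and lets each point's KNN set include itself.
```python
import torch
import torch.nn.functional as F

def local_distillation_loss(f_student, f_teacher, xyz, k=8):
    # f_student, f_teacher: (N, C) per-voxel/per-point features; xyz: (N, 3) positions
    idx = torch.cdist(xyz, xyz).topk(k, largest=False).indices   # (N, k) nearest neighbours

    def graph_feature(f):
        neighbors = f[idx]                                        # (N, k, C)
        center = f.unsqueeze(1).expand_as(neighbors)
        # EdgeConv-style edge feature: concat(center, neighbor - center), max-pooled over k
        return torch.cat([center, neighbors - center], dim=-1).max(dim=1).values

    return F.mse_loss(graph_feature(f_student), graph_feature(f_teacher))

loss = local_distillation_loss(torch.randn(256, 64), torch.randn(256, 64), torch.randn(256, 3))
```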
Reweighted Learning Strategy One of the mainstream meth-
ods for processing point clouds is to convert them into volu-
metric voxels and then encode them as regular data. How-
ever, due to the sparsity and the noise in point clouds, most
of these voxels contain only a single point. For instance, as
shown in Figure 2, on the KITTI dataset, around 68% voxels
in point clouds contain only one point, which has a high
probability of being a noise point. Hence, the representative
features in these single-point voxels have relatively lower im-
portance in knowledge distillation compared with the voxels
which contain multiple points. Motivated by this observation,
we propose a reweighted learning strategy, which highlights
student learning on the voxels with multiple points by giv-
ing them larger learning weights. Besides, a similar idea
can also be easily extended to raw points-based detectors
to highlight knowledge distillation on the points with more
considerable influence on the prediction.Extensive experiments on both voxels-based and raw-
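A minimal sketch of the reweighting idea follows: voxels containing more points receive larger distillation weights. The clipping value and the normalization are illustrative choices, not the paper's exact weighting function.
```python
import torch

def reweighted_distill_loss(f_student, f_teacher, points_per_voxel, clip=5):
    # f_student, f_teacher: (N, C) features; points_per_voxel: (N,) integer counts
    w = points_per_voxel.clamp(max=clip).float()
    w = w * len(w) / w.sum()                          # normalize so the mean weight is 1
    per_voxel = ((f_student - f_teacher) ** 2).mean(dim=-1)
    return (w * per_voxel).mean()

loss = reweighted_distill_loss(torch.randn(100, 64), torch.randn(100, 64),
                               torch.randint(1, 10, (100,)))
```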
points based detectors have been conducted to demonstrate
the effectiveness of our method over the previous seven
knowledge distillation methods. As shown in Figure 1, on
PointPillars and SECOND detectors, our method leads to 4 ×
compression and 0.9 ∼1.8 mAP improvements at the same
time. On PointRCNN, our method leads to 8 ×compression
with only 0.2 BEV mAP drop. Our main contributions be
summarized as follows.
•We propose local distillation , which firstly encodes the
local geometric structure of point clouds with dynamic
graph convolution and then distills them to students.
•We propose reweighted learning strategy to handle the
sparsity and noise in point clouds. It highlights stu-
dent learning on the voxels, which have more points
inside them, by giving them higher learning weights in
knowledge distillation.
•Extensive experiments on both voxels-based and raw
points-based detectors have been conducted to demon-
strate the performance of our method over seven previ-
ous methods. Besides, we have released our codes to
promote future research.
|
Yang_Vid2Seq_Large-Scale_Pretraining_of_a_Visual_Language_Model_for_Dense_CVPR_2023 | Abstract
In this work, we introduce Vid2Seq, a multi-modal
single-stage dense event captioning model pretrained on
narrated videos which are readily-available at scale. The
Vid2Seq architecture augments a language model with spe-
cial time tokens, allowing it to seamlessly predict event
boundaries and textual descriptions in the same output se-
quence. Such a unified model requires large-scale training
data, which is not available in current annotated datasets.
We show that it is possible to leverage unlabeled narrated
videos for dense video captioning, by reformulating sen-
tence boundaries of transcribed speech as pseudo event
boundaries, and using the transcribed speech sentences as
pseudo event captions. The resulting Vid2Seq model pre-
trained on the YT-Temporal-1B dataset improves the state
of the art on a variety of dense video captioning bench-
marks including YouCook2, ViTT and ActivityNet Captions.
Vid2Seq also generalizes well to the tasks of video para-
graph captioning and video clip captioning, and to few-shot
settings. Our code is publicly available at [1].
| 1. Introduction
Dense video captioning requires the temporal localiza-
tion and captioning of all events in an untrimmed video [45,
98, 127]. This differs from standard video captioning [62,
69, 79], where the goal is to produce a single caption for
a given short video clip. Dense captioning is significantly
more difficult, as it raises the additional complexity of lo-
calizing the events in minutes-long videos. However, it also
*This work was done when the first author was an intern at Google.benefits from long-range video information. This task is
potentially highly useful in applications such as large-scale
video search and indexing, where the video content is not
segmented into clips.
Existing methods mostly resort to two-stage ap-
proaches [36, 45, 96], where events are first localized and
then captioned. To further enhance the inter-task interac-
tion between event localization and captioning, some ap-
proaches have introduced models that jointly solve the two
tasks [19,98,127]. However, often these approaches still re-
quire task-specific components such as event counters [98].
Furthermore, they exclusively train on manually annotated
datasets of limited size [34, 45, 126], which makes it diffi-
cult to effectively solve the task. To address these issues,
we take inspiration from recent sequence-to-sequence mod-
els pretrained on Web data which have been successful on a
wide range of vision and language tasks [3,10,12,101,113].
First, we propose a video language model, called
Vid2Seq. We start from a language model trained on Web
text [77] and augment it with special time tokens that repre-
sent timestamps in the video. Given video frames and tran-
scribed speech inputs, the resulting model jointly predicts
all event captions and their corresponding temporal bound-
aries by generating a single sequence of discrete tokens, as
illustrated in Figure 1 (right). Such a model therefore has
the potential to learn multi-modal dependencies between
the different events in the video via attention [90]. However
this requires large-scale training data, which is not avail-
able in current dense video captioning datasets [34,45,126].
Moreover, collecting manual annotations of dense captions
for videos is expensive and prohibitive at scale.
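A hypothetical sketch of the single-sequence output format is shown below: event boundaries are quantized into special time tokens interleaved with caption tokens. The number of bins and the token spelling are assumptions for illustration, not Vid2Seq's exact vocabulary or tokenizer.
```python
def events_to_sequence(events, duration, n_bins=100):
    # events: list of (start_sec, end_sec, caption); returns one flat token sequence
    tokens = []
    for start, end, caption in sorted(events):
        s = min(int(start / duration * n_bins), n_bins - 1)
        e = min(int(end / duration * n_bins), n_bins - 1)
        tokens += [f"<time={s}>", f"<time={e}>"] + caption.split()
    return tokens

events = [(2.0, 7.5, "a skier puts on his gear"), (8.0, 14.0, "he descends the slope")]
print(events_to_sequence(events, duration=20.0))
# ['<time=10>', '<time=37>', 'a', 'skier', ..., '<time=40>', '<time=70>', 'he', ...]
```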
Hence we propose to pretrain Vid2Seq by leveraging un-
labeled narrated videos which are readily-available at scale.
To do this, we reformulate sentence boundaries of tran-
scribed speech as pseudo event boundaries, and use the tran-
scribed speech sentences as pseudo event captions. We then
pretrain Vid2Seq with a generative objective, that requires
predicting the transcribed speech given visual inputs, and
a denoising objective, which masks spans of transcribed
speech. Note that transcribed speech may not describe the
video content faithfully, and is often temporally misaligned
with the visual stream [31, 42, 70]. For instance, from the
example in Figure 1 (left), one can understand that the grey
skier has descended a slope from the last speech sentence
which is said after he actually descended the slope. Intu-
itively, Vid2Seq is particularly suited for learning from such
noisy supervision as it jointly models allnarrations and the
corresponding timestamps in the video.
We demonstrate the effectiveness of our pretrained
model through extensive experiments. We show the im-
portance of pretraining on untrimmed narrated videos, the
ability of Vid2Seq to use both the visual and speech modali-
ties, the importance of the pretraining objectives, the benefit
of joint caption generation and localization, as well as the
importance of the language model size and the scale of the
pretraining dataset. The pretrained Vid2Seq model achieves
state-of-the-art performance on various dense video cap-
tioning benchmarks [34, 45, 126]. Our model also excels
at generating paragraphs of text describing the video: with-
out using ground-truth event proposals at inference time,
our model outperforms all prior approaches including those
that rely on such proposals [49,75,124]. Moreover, Vid2Seq
generalizes well to the standard task of video clip cap-
tioning [8, 105]. Finally, we introduce a new few-shot
dense video captioning setting in which we finetune our pre-
trained model on a small fraction of the downstream train-
ing dataset and show the benefits of Vid2Seq in this setting.
In summary, we make the following contributions:
(i)We introduce Vid2Seq for dense video captioning.
Given multi-modal inputs (transcribed speech and video),
Vid2Seq predicts a single sequence of discrete tokens that
includes caption tokens interleaved with special time to-
kens that represent event timestamps. (ii)We show that
transcribed speech and corresponding timestamps in unla-
beled narrated videos can be effectively used as a source
of weak supervision for dense video captioning. (iii) Fi-
nally, our pretrained Vid2Seq model improves the state of
the art on three dense video captioning datasets (YouCook2,
ViTT, ActivityNet Captions), two video paragraph caption-
ing benchmarks (YouCook2, ActivityNet Captions) and two
video clip captioning datasets (MSR-VTT, MSVD), and
also generalizes well to few-shot settings.
Our code implemented in Jax and based on the Scenic
library [18] is publicly released at [1]. |
Yao_DetCLIPv2_Scalable_Open-Vocabulary_Object_Detection_Pre-Training_via_Word-Region_Alignment_CVPR_2023 | Abstract
This paper presents DetCLIPv2, an efficient and scalable
training framework that incorporates large-scale image-
text pairs to achieve open-vocabulary object detection
(OVD). Unlike previous OVD frameworks that typically rely
on a pre-trained vision-language model (e.g., CLIP) or ex-
ploit image-text pairs via a pseudo labeling process, Det-
CLIPv2 directly learns the fine-grained word-region align-
ment from massive image-text pairs in an end-to-end man-
ner. To accomplish this, we employ a maximum word-region
similarity between region proposals and textual words to
guide the contrastive objective. To enable the model to gain
localization capability while learning broad concepts, Det-
CLIPv2 is trained with a hybrid supervision from detection,
grounding and image-text pair data under a unified data
formulation. By jointly training with an alternating scheme
and adopting low-resolution input for image-text pairs, Det-
CLIPv2 exploits image-text pair data efficiently and ef-
fectively: DetCLIPv2 utilizes 13× more image-text pairs
than DetCLIP with a similar training time and improves
performance. With 13M image-text pairs for pre-training,
DetCLIPv2 demonstrates superior open-vocabulary detec-
tion performance, e.g., DetCLIPv2 with Swin-T backbone
achieves 40.4% zero-shot AP on the LVIS benchmark,
which outperforms previous works GLIP/GLIPv2/DetCLIP
by 14.4/11.4/4.5% AP, respectively, and even beats its fully-
supervised counterpart by a large margin.
†Corresponding author: [email protected], [email protected]
| 1. Introduction
Traditional object detection frameworks [6,35,36,57] are
typically trained to predict a set of predefined categories,
which fails to meet the demand of many downstream appli-
cation scenarios that require to detect arbitrary categories
(denoted as open-vocabulary detection, OVD). For exam-
ple, a robust autonomous driving system requires accurate
predictions for all classes of objects on the road [26]. Ex-
tending traditional object detectors to adapt these scenar-
ios needs tremendous human effort for extra instance-level
bounding-box annotations, especially for rare classes. To
obtain an open-vocabulary detector without the expensive
annotation process, the central question we should ask is:
where does knowledge about unseen categories come from?
Recent works [16,44,51] try to achieve open-vocabulary
object detection by transferring knowledge from a pre-
trained vision-language (VL) model [20, 33, 49]. E.g.,
ViLD [16] distills the CLIP’s [33] image embeddings of
cropped proposals into the proposal features of a detection
model. However, these solutions suffer from the domain
gap problem: VL models are typically pre-trained with
image-level supervision using a fixed-resolution input,
and are thus not capable of recognizing objects with various
scales in the detection task, especially for small objects.
Another line of work resorts to exploiting massive
image-text pairs crawled from the Internet. To utilize the
image-text pair data without instance-level annotation, ap-
proaches [13, 14, 19, 27, 48, 54] generate pseudo-bounding-
box labels following a self-training paradigm [40] or based
on a pre-trained VL model [33]. However, their final per-
formance is restricted by the quality of pseudo-labels pro-
vided by a detector trained with limited human-annotated
concepts or a VL model suffering from the aforementioned
domain gap problem. Besides, using high-resolution inputs
similar to detection data for massive image-text pairs will
impose a huge computational burden on training, prevent-
ing us from further scaling up image-text pairs.
To address the above issues, we present DetCLIPv2, an
end-to-end open-vocabulary detection pre-training frame-
work that effectively incorporates large-scale image-text
pairs. DetCLIPv2 simultaneously learns localization ca-
pability and knowledge of broad concepts without relying
on a teacher model to provide pseudo labels. Specifically,
we perform joint training with heterogeneous data from
multiple sources, including detection [38], grounding [22]
and image-text pairs [7, 39], under a unified data formula-
tion. To enable image-text pairs without instance-level an-
notations to facilitate learning of detection, inspired by [49],
we employ an optimal matching-based set similarity be-
tween visual regions and textual concepts to guide the con-
trastive learning. By alternating different types of data for
training, we enable a “flywheel effect”: learning from de-
tection data provides accurate localization, which helps ex-
tract representative regions for contrastive learning, while
contrastive learning from image-text pairs helps recognize
broader concepts, which further improves the localization
of unseen categories. As the training goes on, the detector
learns to locate and recognize increasingly rich concepts.
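A minimal sketch of a fine-grained word-region contrastive objective of this kind is given below: each word is scored against its best-matching region proposal, the scores are averaged over the caption's words to obtain an image-text similarity, and a symmetric InfoNCE loss is applied over the batch. The tensor shapes, the max-over-regions/mean-over-words aggregation, and the temperature value are assumptions for illustration and may differ from the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def word_region_similarity(region_emb: torch.Tensor, word_emb: torch.Tensor) -> torch.Tensor:
    """Fine-grained similarity between images and texts in a batch:
    for every word, take its best-matching region proposal, then average
    over the words of the caption.
    region_emb: (B, R, D) L2-normalized region-proposal embeddings
    word_emb:   (B, W, D) L2-normalized word embeddings
    returns:    (B, B) image-text similarity matrix."""
    sim = torch.einsum("ird,jwd->ijrw", region_emb, word_emb)
    best_region = sim.max(dim=2).values        # (B, B, W): max over regions
    return best_region.mean(dim=2)             # average over words

def word_region_contrastive_loss(region_emb, word_emb, temperature=0.07):
    """Symmetric InfoNCE over the batch driven by the word-region similarity."""
    logits = word_region_similarity(region_emb, word_emb) / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random, L2-normalized embeddings.
B, R, W, D = 4, 32, 8, 256
regions = F.normalize(torch.randn(B, R, D), dim=-1)
words = F.normalize(torch.randn(B, W, D), dim=-1)
print(word_region_contrastive_loss(regions, words).item())
```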
Furthermore, to relieve the computational burden brought
by large-scale image-text pairs, we adopt a low-resolution
input for image-text pair data, which significantly improves
the training efficiency. This is a reasonable design since the
caption of image-text pair data typically describes only the
main objects appearing in the image, which alleviates the
necessity of high-resolution training.
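The snippet below sketches how such an alternating, mixed-resolution schedule might be organized; the round-robin order and the resolutions (e.g., 1333 vs. 320 pixels) are illustrative assumptions rather than the actual training configuration.

```python
from itertools import cycle

def alternating_batches(det_loader, grd_loader, itp_loader):
    """Round-robin over heterogeneous data sources; image-text pairs are
    drawn at a much lower resolution than detection/grounding data."""
    sources = cycle([
        ("detection", det_loader, 1333),   # box-level supervision, high-res
        ("grounding", grd_loader, 1333),   # phrase-level supervision, high-res
        ("image_text", itp_loader, 320),   # caption-only supervision, low-res
    ])
    for name, loader, resolution in sources:
        yield name, resolution, next(loader)

# Toy usage with never-ending dummy iterators standing in for real dataloaders.
dummy = lambda tag: iter(lambda: {"source": tag}, None)
for _, (name, res, batch) in zip(range(6), alternating_batches(
        dummy("det"), dummy("grd"), dummy("itp"))):
    print(name, res, batch["source"])
```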
Figure 2. Different OVD training paradigms. (a) Distilling knowledge from a pre-trained VL model [16]. (b) Exploiting image-text pairs via pseudo labeling [27]. (c) Our end-to-end joint training eliminates complex multi-stage training schemes, allowing for mutual benefits in learning from different types of data.
Benefiting from the effective designs, DetCLIPv2
demonstrates superior open-vocabulary detection perfor-
mance and promising scaling behavior. E.g., compared
to the prior work DetCLIP [48], DetCLIPv2 is able to
exploit 13× more image-text pairs while requiring only
a similar training time. Using the vanilla ATSS [53] as
the detector, DetCLIPv2 with Swin-T backbone achieves
40.4% zero-shot AP on the LVIS [17] benchmark, sur-
passing previous works GLIP [27]/GLIPv2 [52]/DetCLIP
[48] by 14.4/11.4/4.5% AP, respectively. DetCLIPv2 also
exhibits great generalization when transferring to down-
stream tasks, e.g., it achieves SoTA fine-tuning performance
on LVIS and ODinW13 [27]. We present a possibility
of achieving open-world detection by incorporating large-
scale image-text pairs and hope it will enlighten the commu-
nity to explore a similar successful trajectory to CLIP [33].
|
Xu_Q-DETR_An_Efficient_Low-Bit_Quantized_Detection_Transformer_CVPR_2023 | Abstract
The recent detection transformer (DETR) has ad-
vanced object detection, but its application on resource-
constrained devices requires massive computation and
memory resources. Quantization stands out as a solution
by representing the network in low-bit parameters and op-
erations. However, there is a significant performance drop
when performing low-bit quantized DETR (Q-DETR) with
existing quantization methods. Through our empirical
analyses, we find that the bottlenecks of Q-DETR come
from query information distortion. This paper addresses
this problem based on a distribution rectification distillation
(DRD). We formulate our DRD as a bi-level optimization
problem, which can be derived by generalizing the informa-
tion bottleneck (IB) principle to the learning of Q-DETR.
At the inner level, we conduct a distribution alignment for
the queries to maximize the self-information entropy. At the
upper level, we introduce a new foreground-aware query
matching scheme to effectively transfer the teacher informa-
tion to distillation-desired features to minimize the condi-
tional information entropy. Extensive experimental results
show that our method performs much better than prior arts.
For example, the 4-bit Q-DETR can theoretically acceler-
ate DETR with ResNet-50 backbone by 6.6× and achieve
39.4% AP, with only a 2.6% performance gap compared with its
real-valued counterpart on the COCO dataset1.
| 1. Introduction
Inspired by the success of natural language processing
(NLP), object detection with transformers (DETR) has been
introduced to train an end-to-end detector via a transformer
encoder-decoder [4]. Unlike early works [22, 31] that often
employ convolutional neural networks (CNNs) and require
post-processing procedures, e.g., non-maximum suppres-
sion (NMS), and hand-designed sample selection, DETR
treats object detection as a direct set prediction problem.
†Equal contribution.
∗Corresponding author: [email protected]
1Code: https://github.com/SteveTsui/Q-DETR
Figure 1. The histogram of query values q (blue shadow) and corresponding PDF curves (red curve) of Gaussian distribution [17], w.r.t. the cross attention of different decoder layers (decoder.0/2/5.co_attn.query) in (a) real-valued DETR-R50, and (b) 4-bit quantized DETR-R50 (baseline). Gaussian distribution is generated from the statistical mean and variance of the query values. The query in quantized DETR-R50 bears information distortion compared with the real-valued one. Experiments are performed on the VOC dataset [9].
Despite this attractiveness, DETR usually has a tremen-
dous number of parameters and float-pointing operations
(FLOPs). For instance, there are 39.8M parameters taking
up 159MB memory usage and 86G FLOPs in the DETR
model with ResNet-50 backbone [12] (DETR-R50). This
leads to an unacceptable memory and computation con-
sumption during inference, and challenges deployments on
devices with limited supplies of resources.
Therefore, substantial efforts on network compression
have been made towards efficient online inference [7, 33,
43, 44]. Quantization is particularly popular for deploying
on AI chips by representing a network in low-bit formats.
Yet prior post-training quantization (PTQ) for DETR [26]
derives quantized parameters from pre-trained real-valued
models, which often leaves the model performance in a
sub-optimal state due to the lack of fine-tuning on the
training data. In particular, the performance drastically
drops when quantized to ultra-low bits (4-bits or less). Al-
ternatively, quantization-aware training (QAT) [25, 42] per-
forms quantization and fine-tuning on the training dataset
simultaneously, leading to trivial performance degradation
even with significantly lower bits. Though QAT meth-
ods have been proven to be very effective in compressing
CNNs [8, 27] for computer vision tasks, an exploration of
low-bit DETR remains untouched.
Figure 2. Spatial attention weight maps in the last decoder of (a) real-valued DETR-R50, and (b) 4-bit quantized DETR-R50. The green rectangle denotes the ground-truth bounding box. Following [29], the highlighted area denotes the large attention weights in the selected four heads in compliance with bound prediction. Compared to its real-valued counterpart that focuses on the ground-truth bounds, quantized DETR-R50 deviates significantly.
In this paper, we first build a low-bit DETR baseline,
a straightforward solution based on common QAT tech-
niques [2]. Through an empirical study of this baseline,
we observe significant performance drops on the VOC [9]
dataset. For example, a 4-bit quantized DETR-R50 us-
ing LSQ [8] only achieves 76.9% AP50, leaving a 6.4%
performance gap compared with the real-valued DETR-
R50. We find that the incompatibility of existing QAT
methods mainly stems from the unique attention mecha-
nism in DETR, where the spatial dependencies are first con-
structed between the object queries and encoded features.
Then the co-attended object queries are fed into box coor-
dinates and class labels by a feed-forward network. A sim-
ple application of existing QAT methods on DETR leads to
query information distortion, and therefore the performance
severely degrades. Fig. 1 exhibits an example of informa-
tion distortion in query features of 4-bit DETR-R50, where
we can see a significant distribution variation between the query
modules of the quantized DETR and its real-valued version. The
query information distortion causes an inaccurate focus of
spatial attention, which can be verified by following [29] to
visualize the spatial attention weight maps in 4-bit and real-
valued DETR-R50 in Fig. 2. We can see that the quantized
DETR-R50 suffers from inaccurate object localization. Therefore,
a more generic method for DETR quantization is necessary.
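For reference, the snippet below sketches the kind of learnable-step-size fake quantizer (in the spirit of LSQ [8]) that such a QAT baseline builds on, using a straight-through estimator for the rounding. It omits LSQ's gradient scaling and per-layer initialization, so it is a simplified stand-in rather than the baseline's exact implementation.

```python
import torch

class FakeQuant(torch.nn.Module):
    """Simplified LSQ-style uniform fake quantizer: a learnable step size,
    clamping to a signed b-bit range, and a straight-through estimator so
    gradients flow through the rounding."""
    def __init__(self, bits: int = 4, init_step: float = 0.1):
        super().__init__()
        self.qmin = -(2 ** (bits - 1))      # e.g. -8 for 4 bits
        self.qmax = 2 ** (bits - 1) - 1     # e.g. +7 for 4 bits
        self.step = torch.nn.Parameter(torch.tensor(init_step))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = torch.clamp(x / self.step, self.qmin, self.qmax)
        q = q + (torch.round(q) - q).detach()   # straight-through estimator
        return q * self.step

# Example: fake-quantize the query features of a decoder layer.
fq = FakeQuant(bits=4)
queries = torch.randn(2, 100, 256)             # (batch, num_queries, dim)
loss = fq(queries).pow(2).mean()
loss.backward()                                 # gradients also reach fq.step
```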
To tackle the issue above, we propose an efficient low-bit
quantized DETR (Q-DETR) by rectifying the query infor-
mation of the quantized DETR as that of the real-valued
counterpart. Fig. 3 provides an overview of our Q-DETR,
which is mainly accomplished by a distribution rectification knowledge distillation method (DRD). We find ineffec-
tive knowledge transferring from the real-valued teacher to
the quantized student primarily because of the information
gap and distortion. Therefore, we formulate our DRD as
a bi-level optimization framework established on the infor-
mation bottleneck principle (IB). Generally, it includes an
inner-level optimization to maximize the self-information
entropy of student queries and an upper-level optimization
to minimize the conditional information entropy between
student and teacher queries. At the inner level, we conduct a
distribution alignment for the query guided by its Gaussian-
alike distribution, as shown in Fig. 1, leading to an explicit
state in compliance with its maximum information entropy
in the forward propagation. At the upper level, we introduce
a new foreground-aware query matching that filters out low-
quality student queries for exact one-to-one query match-
ing between student and teacher, providing valuable knowl-
edge gradients to push minimum conditional information
entropy in the backward propagation.
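A rough sketch of these two levels is given below: the student queries are standardized toward the zero-mean, unit-variance Gaussian that maximizes entropy for a fixed variance, and a simple MSE between matched student and teacher queries stands in for minimizing the conditional entropy. The standardization, the externally supplied matching indices, and the MSE surrogate are simplifying assumptions, not the paper's exact bi-level procedure.

```python
import torch

def rectify_queries(q: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Inner level (sketch): standardize student queries with their own
    statistics, pushing them toward the zero-mean unit-variance Gaussian
    that maximizes self-information entropy for a fixed variance."""
    mean = q.mean(dim=-1, keepdim=True)
    std = q.std(dim=-1, keepdim=True)
    return (q - mean) / (std + eps)

def distill_matched_queries(student_q, teacher_q, match_idx):
    """Upper level (sketch): assuming a foreground-aware matching step has
    already paired each kept student query with a teacher query (indices
    supplied here), minimize an MSE between the rectified student queries
    and their teacher counterparts as a stand-in for minimizing conditional
    information entropy."""
    s = rectify_queries(student_q)[:, match_idx]
    t = teacher_q[:, match_idx]
    return torch.nn.functional.mse_loss(s, t)

# Toy usage: 100 decoder queries of dimension 256, 30 of them matched.
student = torch.randn(2, 100, 256, requires_grad=True)
teacher = torch.randn(2, 100, 256)
print(distill_matched_queries(student, teacher, match_idx=torch.arange(30)).item())
```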
This paper attempts to introduce a generic method for
DETR quantization. The significant contributions in this
paper are outlined as follows: (1) We develop the first QAT
quantization framework for DETR, dubbed Q-DETR. (2)
We use a bi-level optimization distillation framework, ab-
breviated as DRD. (3) We observe a significant performance
increase compared to existing quantized baselines.
|