Remote sensing scene classification deals with the problem of classifying the land use/cover of a region from images. To predict the development and socioeconomic structures of cities, the status of land use in regions is tracked by the national mapping agencies of countries. Many of these agencies use land-use types arranged in multiple levels. In this paper, we examine the efficiency of a hierarchically designed CNN-based framework suitable for such arrangements. We use the NWPU-RESISC45 dataset for our experiments and arrange it in a two-level nested hierarchy. We cascade two deep CNN models, each initialized with the DenseNet-121 architecture. We provide a detailed empirical analysis comparing the performance of this hierarchical scheme with that of its non-hierarchical counterpart, together with the individual model performances. We also evaluate the performance of the hierarchical structure statistically to validate the presented empirical results. Our experiments show that although the individual classifiers for the different sub-categories in the hierarchical scheme perform well, the accumulation of classification errors in the cascaded structure prevents its classification performance from exceeding that of the non-hierarchical deep model.
|
The purpose of this paper is to define statistically convergent sequences with respect to metrics on generalized metric spaces (g-metric spaces) and to investigate the basic properties of this statistical form of convergence.
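For context, a sketch of the classical metric-space definition that the paper presumably generalizes (the g-metric version itself is not reproduced here): a sequence $(x_k)$ in a metric space $(X,d)$ is statistically convergent to $x$ when the indices violating $\varepsilon$-closeness have natural density zero.

```latex
% Classical statistical convergence in a metric space (X, d); the paper's
% g-metric variant presumably replaces d by a generalized metric.
\lim_{n\to\infty} \frac{1}{n}\,
  \Bigl|\bigl\{\, k \le n : d(x_k, x) \ge \varepsilon \,\bigr\}\Bigr| = 0
  \qquad \text{for every } \varepsilon > 0 .
```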
|
Ethereum smart contracts are automated decentralized applications on the
blockchain that describe the terms of the agreement between buyers and sellers,
reducing the need for trusted intermediaries and arbitration. However, the
deployment of smart contracts introduces new attack vectors into the
cryptocurrency systems. In particular, programming flaws in smart contracts can
be and have already been exploited to gain enormous financial profits. It is
thus an emerging yet crucial issue to detect vulnerabilities of different
classes in contracts in an efficient manner. Existing machine learning-based vulnerability detection methods are limited: they only inspect whether the smart contract is vulnerable, train individual classifiers for each specific vulnerability, or demonstrate multi-class vulnerability detection without considering extensibility. To overcome the scalability and generalization limitations of existing works, we propose ESCORT, the first Deep Neural Network (DNN)-based vulnerability detection framework for Ethereum smart contracts that supports lightweight transfer learning on unseen security vulnerabilities and is thus extensible and generalizable. ESCORT leverages a multi-output NN
architecture that consists of two parts: (i) A common feature extractor that
learns the semantics of the input contract; (ii) Multiple branch structures
where each branch learns a specific vulnerability type based on features
obtained from the feature extractor. Experimental results show that ESCORT
achieves an average F1-score of 95% on six vulnerability types and the
detection time is 0.02 seconds per contract. When extended to new vulnerability
types, ESCORT yields an average F1-score of 93%. To the best of our knowledge,
ESCORT is the first framework that enables transfer learning on new
vulnerability types with minimal modification of the DNN model architecture and
re-training overhead.
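A minimal sketch of such a multi-output network is given below, assuming a PyTorch implementation; the GRU encoder, layer sizes, and the `add_branch` helper are illustrative assumptions, not ESCORT's published architecture.

```python
import torch
import torch.nn as nn

class MultiOutputDetector(nn.Module):
    """Shared feature extractor + one binary branch per vulnerability type."""
    def __init__(self, vocab_size=256, embed_dim=64, hidden_dim=128, n_vulns=6):
        super().__init__()
        # Common feature extractor: learns the semantics of the input
        # contract (here treated as a token sequence, e.g. bytecode).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.branches = nn.ModuleList(
            [self._make_branch(hidden_dim) for _ in range(n_vulns)])

    @staticmethod
    def _make_branch(hidden_dim):
        return nn.Sequential(nn.Linear(hidden_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def add_branch(self):
        # Transfer-learning step for a new vulnerability type: freeze the
        # extractor and train only the newly appended head.
        self.branches.append(self._make_branch(self.encoder.hidden_size))

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        feat = h[:, -1]                     # last hidden state as contract feature
        return torch.cat([b(feat) for b in self.branches], dim=1)

model = MultiOutputDetector()
logits = model(torch.randint(0, 256, (8, 512)))  # batch of 8 tokenized contracts
probs = torch.sigmoid(logits)                    # per-vulnerability probabilities
```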
|
Kinetic simulations and theory demonstrate that whistler waves can excite
oblique, short-wavelength fluctuations through secondary drift instabilities if
a population of sufficiently cold plasma is present. The excited modes lead to
heating of the cold populations and damping of the primary whistler waves. The
instability threshold depends on the density and temperature of the cold
population and can be relatively small if the temperature of the cold
population is sufficiently low. This mechanism may thus play a significant role in controlling the amplitude of whistlers in regions of the Earth's magnetosphere where cold background plasma of sufficient density is present.
|
Knowledge graph embedding has been an active research topic for knowledge
base completion (KGC), with progressive improvement from the initial TransE,
TransH, and RotatE to the current state-of-the-art QuatE. However, QuatE ignores the multi-faceted nature of entities and the complexity of relations, using only a rigid operation on the quaternion space to capture the interaction between an entity pair and a relation, leaving opportunities for better knowledge representations that would ultimately help KGC. In this paper, we propose a novel model, QuatDE, with a dynamic mapping strategy that explicitly captures the variety of relational patterns and separates the different semantic aspects of an entity, using transition vectors to adjust the positions of the entity embedding vectors in the quaternion space via the Hamilton product, thereby enhancing the feature interaction capability between the elements of a triplet. Experimental results show that QuatDE achieves state-of-the-art performance on three well-established knowledge graph completion benchmarks. In particular, the MR metric improves by a relative 26% on WN18 and 15% on WN18RR, which demonstrates the generalization ability of QuatDE.
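Since the Hamilton product is the workhorse of quaternion-space models such as QuatE and QuatDE, a small sketch may help; the scoring function and embedding details of QuatDE itself are not reproduced here.

```python
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternions p = a+bi+cj+dk and q = e+fi+gj+hk."""
    a, b, c, d = p
    e, f, g, h = q
    return np.array([
        a*e - b*f - c*g - d*h,   # real part
        a*f + b*e + c*h - d*g,   # i
        a*g - b*h + c*e + d*f,   # j
        a*h + b*g - c*f + d*e,   # k
    ])

# Non-commutativity is what lets quaternion rotations encode asymmetric relations:
p, q = np.array([1., 2., 3., 4.]), np.array([5., 6., 7., 8.])
assert not np.allclose(hamilton(p, q), hamilton(q, p))
```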
|
One of the features of the unconventional $s_\pm$ state in iron-based superconductors is the possibility of a transformation to the $s_{++}$ state with increasing nonmagnetic disorder. Detection of such a transition would
prove the existence of the $s_\pm$ state. Here we study the temperature
dependence of the London magnetic penetration depth within the two-band model
for the $s_\pm$ and $s_{++}$ superconductors. By solving Eliashberg equations
accounting for the spin-fluctuation mediated pairing and nonmagnetic impurities
in the $T$-matrix approximation, we derive a set of specific signatures of the $s_\pm \to s_{++}$ transition: (1) a sharp change in the behavior of the penetration depth $\lambda_{L}$ as a function of the impurity scattering rate at low temperatures; (2) an increase of the slope of $\Delta\lambda_{L}(T) = \lambda_{L}(T)-\lambda_{L}(0)$ with temperature before the transition and a decrease after it; (3) a sharp jump in the inverse square of the penetration depth as a function of the impurity scattering rate, $\lambda_{L}^{-2}(\Gamma_a)$, at the transition; (4) a change in the superfluid density $\rho_{s}(T)$ from single-gap behavior in the vicinity of the transition to two-gap behavior upon further increase of the impurity scattering rate.
|
We extend applications of Furstenberg boundary theory to the study of
$C^*$-algebras associated to minimal actions $\Gamma\!\curvearrowright\! X$ of
discrete groups $\Gamma$ on locally compact spaces $X$. We introduce boundary
maps on $(\Gamma,X)$-$C^*$-algebras and investigate their applications in this
context. Among other results, we completely determine when $C^*$-algebras
generated by covariant representations arising from stabilizer subgroups are
simple. We also characterize the intersection property of locally compact
$\Gamma$-spaces and simplicity of their associated crossed products.
|
Deep Reinforcement Learning (RL) techniques can benefit greatly from
leveraging prior experience, which can be either self-generated or acquired
from other entities. Action advising is a framework that provides a flexible
way to transfer such knowledge in the form of actions between teacher-student
peers. However, due to practical concerns, the number of these interactions is limited by a budget; therefore, it is crucial to perform them at the most appropriate moments. There have been several promising recent studies addressing this problem setting, especially from the student's perspective. Despite their success, they have some shortcomings regarding practical applicability and integrity as an overall solution to the learning-from-advice challenge. In this paper, we extend the idea of advice reuse via teacher imitation to construct a unified approach that addresses both the advice collection and the advice utilisation problems. We also propose a method to automatically tune the relevant hyperparameters of these components on the fly, so that the approach can adapt to any task with minimal human intervention. The experiments we performed in 5 different Atari games verify that our algorithm either surpasses or performs on par with its top competitors while being far simpler to employ. Furthermore, its individual components are also found to provide significant advantages on their own.
|
A new approach to the treatment of scattering amplitudes in Glauber theory is proposed. It relies on the use of a generating function, which we find explicitly. The method is applied to the analytical calculation of nucleus-nucleus elastic scattering amplitudes at all interaction orders of the Glauber theory.
|
Neutral hydrogen (HI) intensity mapping is a promising technique to probe the
large-scale structure of the Universe, improving our understanding of the
late-time accelerated expansion. In this work, we first scrutinize how an
alternative cosmology, interacting Dark Energy, can affect the 21-cm angular
power spectrum relative to the concordance $\Lambda$CDM model. We re-derive the
21-cm brightness temperature fluctuation in the context of such interaction and
uncover an extra new contribution. Then we estimate the noise level of three
upcoming HI intensity mapping surveys, BINGO, SKA1-MID Band$\,$1 and Band$\,$2,
respectively, and employ a Fisher matrix approach to forecast their constraints
on the interacting Dark Energy model. We find that while $\textit{Planck}\,$ 2018 remains dominant for early-Universe parameter constraints, BINGO and SKA1-MID Band$\,$2 provide bounds on the dark energy equation of state $w$, the interaction strength $\lambda_i$ and the reduced Hubble constant $h$ that are complementary to the latest CMB measurements, and SKA1-MID Band$\,$1 even outperforms $\textit{Planck}\,$ 2018 on these late-Universe parameters. The expected minimum uncertainties are given by SKA1-MID
Band$\,$1+$\textit{Planck}\,$: $\sim 0.35\%$ on $w$, $\sim 0.27\%$ on $h$,
$\sim 0.61\%$ on HI bias $b_{\rm HI}$, and an absolute uncertainty of about
$3\times10^{-4}$ ($7\times10^{-4}$) on $\lambda_{1}$ ($\lambda_{2}$). Moreover,
we quantify how increasing the number of redshift bins and including redshift-space distortions update the constraints. Our results indicate a
bright prospect for HI intensity mapping surveys in constraining interacting
Dark Energy, whether on their own or further by a joint analysis with other
measurements.
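For readers unfamiliar with the forecasting technique, here is a generic Fisher-matrix sketch (not the authors' pipeline): for a Gaussian likelihood with a diagonal covariance, $F_{ij} = \sum_k \partial_i C_k\,\partial_j C_k/\sigma_k^2$, and the marginalised $1\sigma$ errors are the square roots of the diagonal of $F^{-1}$. The `model` callable and fiducial values are placeholders.

```python
import numpy as np

def fisher_matrix(model, theta0, sigma, eps=1e-4):
    """model: theta -> data vector (e.g. binned 21-cm angular power spectrum);
    theta0: fiducial parameters; sigma: per-bin noise."""
    n = len(theta0)
    derivs = []
    for i in range(n):
        dtheta = np.zeros(n)
        dtheta[i] = eps
        # two-sided finite difference of the observable w.r.t. parameter i
        derivs.append((model(theta0 + dtheta) - model(theta0 - dtheta)) / (2 * eps))
    return np.array([[np.sum(di * dj / sigma**2) for dj in derivs] for di in derivs])

# errs = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalised 1-sigma forecasts
```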
|
Navigating a mobile robot along time-efficient and collision-free paths through a crowd remains an open and challenging problem. The main challenge comes from the complex and sophisticated interaction mechanism, which requires the robot to understand the crowd and perform proactive and foresighted behaviors. Deep reinforcement learning is a promising solution to this problem.
However, most previous learning methods incur a tremendous computational
burden. To address these problems, we propose a graph-based deep reinforcement
learning method, SG-DQN, that (i) introduces a social attention mechanism to
extract an efficient graph representation for the crowd-robot state; (ii)
directly evaluates the coarse q-values of the raw state with a learned dueling
deep Q-network (DQN); and then (iii) refines the coarse q-values via online
planning on possible future trajectories. The experimental results indicate
that our model can help the robot better understand the crowd and achieve a
high success rate of more than 0.99 in the crowd navigation task. Compared
against previous state-of-the-art algorithms, our algorithm achieves an
equivalent, if not better, performance while requiring less than half of the
computational cost.
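A hedged sketch of the dueling Q-network head used to score coarse q-values follows; the graph/attention front end of SG-DQN is omitted, and the state and action dimensions are assumptions.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    def __init__(self, state_dim=64, n_actions=11):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)              # state-value stream V(s)
        self.advantage = nn.Linear(128, n_actions)  # advantage stream A(s, a)

    def forward(self, s):
        h = self.trunk(s)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a): standard dueling combination
        return v + a - a.mean(dim=1, keepdim=True)

q_values = DuelingDQN()(torch.randn(4, 64))  # coarse q-values for a state batch
```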
|
Epidemic processes on random graphs or networks are marked by localization of
activity that can trap the dynamics into a metastable state, confined to a
subextensive part of the network, before visiting an absorbing configuration.
The quasistationary (QS) method is a technique to deal with absorbing states in finite systems and has played a central role in the investigation of epidemic processes on heterogeneous networks, where localization is a hallmark. The standard QS method has high computational and algorithmic complexity for large systems and involves parameters whose choice is not systematic. Simpler approaches, however, such as a reflecting boundary condition (RBC), are not able to capture the localization effects as the standard QS method does. In the present work, we propose a QS method that consists of reactivating nodes proportionally to the time they were active during the preceding simulation. The method is compared with the standard QS and RBC methods for the susceptible-infected-susceptible model on complex networks, a prototype of a dynamical process with strong localization effects. We verified that the method performs as well as the standard QS method in all investigated simulations, providing the same scaling exponents, epidemic thresholds, and localized phases, thus overcoming the limitations of the simpler approaches. We also report that the present method significantly reduces the computational and algorithmic complexity compared with the standard QS method. It thus arises as a simpler and more efficient tool to analyze localization on heterogeneous structures through QS simulations.
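A minimal sketch of the proposed reactivation rule for an SIS simulation is given below (not the authors' code); the Gillespie bookkeeping around it is only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def reactivate(active_time):
    """Draw a node to re-infect with probability proportional to the total
    time it has spent infected during the preceding simulation."""
    w = active_time.astype(float).copy()
    if w.sum() == 0:          # early times: fall back to a uniform choice
        w[:] = 1.0
    return rng.choice(len(w), p=w / w.sum())

# Inside a Gillespie-style SIS loop one keeps `active_time` updated for every
# node and, whenever the number of infected nodes drops to zero (the
# absorbing state), restarts the dynamics from
#     infected = {reactivate(active_time)}
```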
|
As 5th-generation (5G) research reaches its twilight, the research community must go beyond 5G and look towards the 2030 connectivity landscape, namely 6G. In this context, this work takes a step towards the 6G vision by proposing a next-generation communication platform that aims to extend the rigid coverage area of fixed-deployment networks with virtual mobile small cells (MSCs) created on demand. Relying on emerging computing paradigms such as NFV (Network Function Virtualization) and SDN (Software Defined Networking), these cells can harness radio and networking capability locally, reducing protocol signaling latency and overhead. These MSCs constitute an intelligent pool of
networking resources that can collaborate to form a wireless network of MSCs
providing a communication platform for localized, ubiquitous and reliable
connectivity. The technology enablers for implementing the MSC concept are also
addressed in terms of virtualization, lightweight wireless security, and energy
efficient RF. The benefits of the MSC architecture towards reliable and
efficient cell offloading are demonstrated as a use-case.
|
In this study, 18F-FDG PET/CT brain scans of 50 patients with head and neck
malignant lesions were employed to systematically assess the relationship
between the amount of injected dose (10%, 8%, 6%, 5%, 4%, 3%, 2%, and 1% of
standard dose) and the image quality through measuring standard image quality
metrics (peak signal-to-noise ratio (PSNR), structural similarity index
(SSIM), root mean square error (RMSE), and standard uptake value (SUV) bias)
for the whole head region as well as within the malignant lesions, considering
the standard-dose PET images as reference. Furthermore, we evaluated the impact
of post-reconstruction Gaussian filtering on the PET images in order to reduce
noise and improve the signal-to-noise ratio at different low-dose levels.
Significant degradation of PET image quality and tumor detectability was observed when the injected dose was reduced to less than 5% of the standard dose, leading to a remarkable increase in RMSE, from 0.173 SUV (at 5%) to 1.454 SUV (at 1%).
quantitative investigation of the malignant lesions demonstrated that SUVmax
bias greatly increased in low-dose PET images (in particular at 1%, 2%, 3%
levels) before applying the post-reconstruction filter, while applying the
Gaussian filter on low-dose PET images led to a significant reduction in SUVmax
bias. The SUVmean bias within the malignant lesions was negligible (less than
1%) in low-dose PET images; however, this bias increased significantly after
applying the post-reconstruction filter. In conclusion, it is strongly recommended that the SUVmax bias and the SUVmean bias in low-dose PET images be considered prior to and following the application of the post-reconstruction Gaussian filter, respectively.
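The image-quality metrics used in the study are standard; a sketch of how they could be computed against the standard-dose reference is shown below, assuming scikit-image and SUV-valued arrays (names are placeholders).

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def quality_metrics(reference, low_dose):
    """PSNR, SSIM and RMSE of a low-dose image vs. the standard-dose reference."""
    data_range = reference.max() - reference.min()
    psnr = peak_signal_noise_ratio(reference, low_dose, data_range=data_range)
    ssim = structural_similarity(reference, low_dose, data_range=data_range)
    rmse = np.sqrt(np.mean((reference - low_dose) ** 2))  # in SUV units
    return psnr, ssim, rmse

def suvmax_bias(reference, low_dose, lesion_mask):
    """Percent bias of SUVmax inside a lesion, relative to the reference."""
    ref_max = reference[lesion_mask].max()
    return 100.0 * (low_dose[lesion_mask].max() - ref_max) / ref_max
```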
|
As an emerging and challenging problem in the computer vision community,
weakly supervised object localization and detection plays an important role for
developing new generation computer vision systems and has received significant
attention in the past decade. Given the large number of methods that have been proposed, a comprehensive survey of these topics is of great importance. In this work, we review (1)
classic models, (2) approaches with feature representations from off-the-shelf
deep networks, (3) approaches solely based on deep learning, and (4) publicly
available datasets and standard evaluation metrics that are widely used in this
field. We also discuss the key challenges in this field, development history of
this field, advantages/disadvantages of the methods in each category, the
relationships between methods in different categories, applications of the
weakly supervised object localization and detection methods, and potential
future directions to further promote the development of this research field.
|
We prove an inequality bounding the renormalized area of a complete minimal
surface in hyperbolic space in terms of the conformal length of its ideal
boundary.
|
Automatic evaluation metrics are a crucial component of dialog systems
research. Standard language evaluation metrics are known to be ineffective for
evaluating dialog. As such, recent research has proposed a number of novel,
dialog-specific metrics that correlate better with human judgements. Due to the
fast pace of research, many of these metrics have been assessed on different
datasets and there has as yet been no time for a systematic comparison between
them. To this end, this paper provides a comprehensive assessment of recently
proposed dialog evaluation metrics on a number of datasets. In this paper, 23
different automatic evaluation metrics are evaluated on 10 different datasets.
Furthermore, the metrics are assessed in different settings, to better qualify
their respective strengths and weaknesses. Metrics are assessed (1) on both the
turn level and the dialog level, (2) for different dialog lengths, (3) for
different dialog qualities (e.g., coherence, engagingness), (4) for different types
of response generation models (i.e., generative, retrieval, simple models and
state-of-the-art models), (5) taking into account the similarity of different
metrics and (6) exploring combinations of different metrics. This comprehensive
assessment offers several takeaways pertaining to dialog evaluation metrics in
general. It also suggests how to best assess evaluation metrics and indicates
promising directions for future work.
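As a sketch of the standard assessment procedure (not the paper's exact pipeline), each metric's scores are correlated with human judgements, e.g. with Pearson and Spearman coefficients; the numbers below are placeholders.

```python
from scipy.stats import pearsonr, spearmanr

human = [4.0, 2.5, 3.0, 1.0, 4.5]        # placeholder human quality ratings
metric = [0.81, 0.40, 0.55, 0.23, 0.77]  # placeholder metric scores, same turns

r, p_r = pearsonr(human, metric)
rho, p_rho = spearmanr(human, metric)
print(f"Pearson r = {r:.2f} (p = {p_r:.3f}), "
      f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```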
|
We present best practices and tools for professionals who support
computational and data intensive (CDI) research projects. The practices
resulted from an initiative that brings together national projects and
university teams that include individual or groups of such professionals. We
focus particularly on practices that differ from those in a general software
engineering context. The paper also describes the initiative, the Xpert Network, where participants exchange successes, challenges, and general information about their activities, leading to increased productivity, efficiency, and coordination in the ever-growing community of scientists who use computational and data-intensive research methods.
|
Neural architecture search (NAS) has been successfully applied to tasks like
image classification and language modeling for finding efficient
high-performance network architectures. In the ASR field, especially end-to-end ASR, the related research is still in its infancy. In this work, we focus on applying NAS to the most popular manually designed model, the Conformer, and propose an efficient ASR model searching method that benefits from the natural advantage of differentiable architecture search (Darts) in reducing computational overheads. We fuse the Darts mutator and Conformer blocks to form a complete search space, within which a modified architecture called the Darts-Conformer cell is found automatically. The entire search process on the AISHELL-1 dataset costs only 0.7 GPU days. Replacing the Conformer encoder with stacked searched cells, we obtain an end-to-end ASR model (named Darts-Conformer) that outperforms the Conformer baseline by 4.7\% on the open-source AISHELL-1 dataset. Besides, we verify the transferability of an architecture searched on a small dataset to a larger 2k-hour dataset. To the best of our knowledge, this is the first successful attempt to apply gradient-based architecture search to an attention-based encoder-decoder ASR model.
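The core DARTS ingredient fused into the Conformer blocks is the mixed operation, whose output is a softmax-weighted sum of candidate operations; a hedged sketch follows, with an illustrative candidate set rather than the paper's search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.identity = nn.Identity()
        self.linear = nn.Linear(dim, dim)
        self.dwconv = nn.Conv1d(dim, dim, 3, padding=1, groups=dim)  # depthwise
        # architecture parameters alpha, updated on validation data in DARTS
        self.alpha = nn.Parameter(torch.zeros(3))

    def forward(self, x):                     # x: (batch, time, dim)
        w = F.softmax(self.alpha, dim=0)
        outs = [self.identity(x),
                self.linear(x),
                self.dwconv(x.transpose(1, 2)).transpose(1, 2)]
        return sum(wi * oi for wi, oi in zip(w, outs))

# After the search, each MixedOp is discretised to its highest-weight
# candidate, yielding the cell that is then stacked to form the encoder.
```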
|
We review state-of-the-art formal methods applied to the emerging field of
the verification of machine learning systems. Formal methods can provide
rigorous correctness guarantees on hardware and software systems. Thanks to the
availability of mature tools, their use is well established in the industry,
and in particular to check safety-critical applications as they undergo a
stringent certification process. As machine learning is becoming more popular,
machine-learned components are now considered for inclusion in critical
systems. This raises the question of their safety and their verification. Yet,
established formal methods are limited to classic, i.e., non-machine-learned,
software. Applying formal methods to verify systems that include machine
learning has only been considered recently and poses novel challenges in
soundness, precision, and scalability.
We first recall established formal methods and their current use in an
exemplar safety-critical field, avionic software, with a focus on abstract
interpretation based techniques as they provide a high level of scalability.
This provides a gold standard and sets high expectations for machine learning
verification. We then provide a comprehensive and detailed review of the formal
methods developed so far for machine learning, highlighting their strengths and
limitations. The large majority of them verify trained neural networks and
employ either SMT, optimization, or abstract interpretation techniques. We also
discuss methods for support vector machines and decision tree ensembles, as
well as methods targeting training and data preparation, which are critical but
often neglected aspects of machine learning. Finally, we offer perspectives for
future research directions towards the formal verification of machine learning
systems.
|
Zeroth-order (ZO, also known as derivative-free) methods, which estimate the gradient from only two function evaluations, have attracted much attention recently because of their broad applications in the machine learning community. The two function evaluations are normally generated with random perturbations drawn from a standard Gaussian distribution. To speed up ZO methods, many techniques, such as variance-reduced stochastic ZO gradients and learning an adaptive Gaussian distribution, have recently been proposed to reduce the variance of ZO gradients. However, it remains an open problem whether there is room to further improve the convergence of ZO methods. To explore this problem, in this paper, we propose a new reinforcement learning based ZO algorithm (ZO-RL) that learns the sampling policy for generating the perturbations in ZO optimization instead of using random sampling. To find the optimal policy, an
actor-critic RL algorithm called deep deterministic policy gradient (DDPG) with
two neural network function approximators is adopted. The learned sampling
policy guides the perturbed points in the parameter space to estimate a more
accurate ZO gradient. To the best of our knowledge, ZO-RL is the first algorithm to learn the sampling policy using reinforcement learning for ZO optimization, an approach orthogonal to existing methods. In particular, ZO-RL can be combined with existing ZO algorithms to accelerate them further. Experimental results for different ZO optimization problems show that ZO-RL can effectively reduce the variance of the ZO gradient by learning a sampling policy, and converges faster than existing ZO algorithms in
different scenarios.
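For context, the two-point estimator that ZO methods build on is sketched below; ZO-RL replaces the random draw of the direction u with a DDPG-learned policy, whereas here u is still sampled randomly for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def zo_gradient(f, x, mu=1e-3):
    """Two-point zeroth-order gradient estimate along a random unit direction u:
    g = d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# ZO gradient descent on a toy quadratic:
f = lambda x: 0.5 * np.sum(x**2)
x = np.ones(10)
for _ in range(500):
    x -= 0.1 * zo_gradient(f, x)
print(f(x))   # close to 0
```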
|
In this work, we investigate the role of multi-nucleon (MN) effects (mainly 2p-2h and RPA) on the sensitivity to various neutrino oscillation parameters in the disappearance channel of the NO$\nu$A (USA) experiment.
Short-range correlations and detector effects have also been included in the
analysis. We use the kinematical method of reconstruction of the incoming
neutrino energy, both at the near and far detectors. The extrapolation
technique has been used to estimate oscillated events at the far detector. The
latest global best fit values of various light neutrino oscillation parameters
have been used in the analysis. We find that MN effects increase the uncertainty in the measurement of neutrino oscillation parameters, while lower detector efficiency is reflected in larger uncertainty. This study can give useful insight
into precision studies at long-baseline neutrino experiments in future
measurements.
|
Predicting future frames of video sequences is challenging due to the complex
and stochastic nature of the problem. Video prediction methods based on
variational auto-encoders (VAEs) have been a great success, but they require
the training data to contain multiple possible futures for an observed video
sequence. This is hard to fulfill when videos are captured in the wild, where any given observation has only one determinate future. As a result, training a vanilla VAE model with these videos inevitably causes posterior collapse. To alleviate this problem, we propose a novel VAE structure, dubbed VAE-in-VAE or VAE$^2$. The key idea is to explicitly introduce stochasticity
into the VAE. We treat part of the observed video sequence as a random
transition state that bridges its past and future, and maximize the likelihood
of a Markov Chain over the video sequence under all possible transition states.
A tractable lower bound is proposed for this intractable objective function and
an end-to-end optimization algorithm is designed accordingly. VAE$^2$ can
mitigate the posterior collapse problem to a large extent, as it breaks the
direct dependence between future and observation and does not directly regress
the determinate future provided by the training data. We carry out experiments
on the large-scale Cityscapes dataset, which contains videos collected from a number of cities. Results show that VAE$^2$ is capable of predicting diverse futures and is more resistant to posterior collapse than other state-of-the-art VAE-based approaches. We believe that VAE$^2$ is also applicable to other stochastic sequence prediction problems whose training data lack stochasticity.
|
Bone is the mineralized tissue constituting the skeletal system, supporting and protecting body organs and tissues. At the molecular level, the mineralized collagen fibril is the basic building block of bone tissue; hence, understanding bone properties down to fundamental tissue structures makes it possible to better identify the mechanisms of structural failure and damage. While efforts have focused on the study of the micro- and macro-scale viscoelasticity related to bone damage and healing based on creep, mineralized collagen has not been explored at the molecular level. We report a study that aims at
systematically exploring the viscoelasticity of collagenous fibrils with
different mineralization levels. We investigate the dynamic mechanical response
upon cyclic and impulsive loads to observe the viscoelastic phenomena from
either shear or extensional strains via molecular dynamics. We perform a
sensitivity analysis with several key benchmarks: intrafibrillar mineralization
percentage, hydration state, and external load amplitude. Our results show a
growth of the dynamic moduli with an increase of mineral percentage, pronounced
at low strains. When intrafibrillar water is present, it softens the elastic component of the material but considerably increases its viscosity, especially at high
frequencies. This behaviour is confirmed from the material response upon
impulsive loads, in which water drastically reduces the relaxation times
throughout the input velocity range by one order of magnitude, with respect to
the dehydrated counterparts. We find that upon transient loads, water has a
major impact on the mechanics of mineralized fibrillar collagen, being able to
improve the capability of the tissue to passively and effectively dissipate
energy, especially after fast and high-amplitude external loads.
|
Determining whether two particle systems are similar is a common problem in
particle simulations. When the comparison should be invariant under
permutations, orthogonal transformations, and translations of the systems,
special techniques are needed. We present an algorithm that can test particle
systems of finite size for similarity and, if they are similar, can find the
optimal alignment between them. Our approach is based on an invariant version
of the root mean square deviation (RMSD) measure and is capable of finding the
globally optimal solution in $O(n^3)$ operations where $n$ is the number of
three-dimensional particles.
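The two textbook ingredients behind such an algorithm are sketched below (this is the standard machinery, not necessarily the authors' exact scheme): for a fixed rotation, the optimal particle matching is a linear assignment problem, solvable in $O(n^3)$ with the Hungarian method, and for a fixed matching the optimal rotation is given by the Kabsch SVD construction.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def best_permutation(A, B):
    """Reorder rows of B to match rows of A, minimising summed squared distances."""
    _, cols = linear_sum_assignment(cdist(A, B, "sqeuclidean"))  # O(n^3)
    return B[cols]

def kabsch_rotation(A, B):
    """Proper rotation R minimising sum_i ||A_i - R B_i||^2 (centered inputs)."""
    U, _, Vt = np.linalg.svd(B.T @ A)          # SVD of the cross-covariance
    d = np.sign(np.linalg.det(U @ Vt))         # enforce det(R) = +1
    return (Vt.T * np.array([1.0, 1.0, d])) @ U.T

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
A -= A.mean(axis=0)                            # remove translations
B = best_permutation(A, A[::-1])               # permuted copy, re-matched
R = kabsch_rotation(A, B)
rmsd = np.sqrt(np.mean(np.sum((A - B @ R.T) ** 2, axis=1)))  # ~0 here
```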
|
As observations of the Hubble parameter from both early and late sources have
improved, the tension between these has increased to be well above the
5$\sigma$ threshold. Given this, the need for an explanation of such a tension
has grown. In this paper, we explore a set of 7 assumptions and show that, in order to alleviate the Hubble tension, a model needs to break at least one of them, providing a quick, easy-to-apply check for new model proposals. We
also use this framework to make a rough categorisation of current proposed
models, and show the existence of at least one under-explored avenue of
alleviating the Hubble tension.
|
Humans express feelings and emotions via different channels. Take language as an example: it entails different sentiments under different visual-acoustic contexts. To precisely understand human intentions and to reduce the misunderstandings caused by ambiguity and sarcasm, we should consider multimodal signals, including textual, visual and acoustic ones. The crucial challenge is to fuse the features of different modalities for sentiment analysis. To effectively fuse the information carried by the different modalities and better predict the sentiments, we design a novel multi-head attention based fusion network, inspired by the observation that the interactions between any two pair-wise modalities are different and do not contribute equally to the final sentiment prediction. By assigning the acoustic-visual, acoustic-textual and visual-textual features reasonable attention weights and exploiting a residual structure, we attain the most significant features.
We conduct extensive experiments on four public multimodal datasets including
one in Chinese and three in English. The results show that our approach
outperforms the existing methods and can explain the contributions of bimodal
interaction in multiple modalities.
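A hedged sketch of pair-wise fusion with multi-head attention and a residual structure follows; the dimensions and the exact wiring are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class PairwiseFusion(nn.Module):
    """One modality (query) attends to another (context), with a residual add."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_mod, context_mod):
        fused, _ = self.attn(query_mod, context_mod, context_mod)
        return self.norm(query_mod + fused)    # residual structure

t = torch.randn(8, 20, 128)  # text tokens
a = torch.randn(8, 50, 128)  # acoustic frames
v = torch.randn(8, 30, 128)  # visual frames

# each ordered modality pair gets its own attention weights
fuse_ta, fuse_tv = PairwiseFusion(), PairwiseFusion()
ta = fuse_ta(t, a)           # text attending to audio
tv = fuse_tv(t, v)           # text attending to vision
sentiment = nn.Linear(128, 1)(torch.cat([ta, tv], dim=1).mean(dim=1))
```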
|
The article considers a two-level open quantum system, whose evolution is
governed by the Gorini--Kossakowski--Lindblad--Sudarshan master equation with
Hamiltonian and dissipation superoperator depending, correspondingly, on
piecewise constant coherent and incoherent controls with constrained
magnitudes. Additional constraints on controls' variations are also considered.
The system is analyzed using Bloch parametrization of the system's density
matrix. We adapt the section method for obtaining outer parallelepipedal and
pointwise estimations of reachable and controllability sets in the Bloch ball
via solving a number of problems for optimizing coherent and incoherent
controls with respect to some objective criteria. The differential evolution
and dual annealing optimization methods are used. The numerical results show
how the reachable sets' estimations depend on distances between the system's
initial states and the Bloch ball's center point, final times, constraints on
controls' magnitudes and variations.
|
The COVID-19 pandemic due to the novel coronavirus SARS CoV-2 has inspired
remarkable breakthroughs in development of vaccines against the virus and the
launch of several phase 3 vaccine trials in Summer 2020 to evaluate vaccine
efficacy (VE). Trials of vaccine candidates using mRNA delivery systems
developed by Pfizer-BioNTech and Moderna have shown substantial VEs of 94-95%,
leading the US Food and Drug Administration to issue Emergency Use
Authorizations and subsequent widespread administration of the vaccines. As the
trials continue, a key issue is the possibility that VE may wane over time.
Ethical considerations dictate that all trial participants be unblinded and
those randomized to placebo be offered vaccine, leading to trial protocol
amendments specifying unblinding strategies. Crossover of placebo subjects to
vaccine complicates inference on waning of VE. We focus on the particular
features of the Moderna trial and propose a statistical framework based on a
potential outcomes formulation within which we develop methods for inference on
whether or not VE wanes over time and estimation of VE at any post-vaccination
time. The framework clarifies assumptions made regarding individual- and
population-level phenomena and acknowledges the possibility that subjects who
are more or less likely to become infected may be crossed over to vaccine
differentially over time. The principles of the framework can be adapted
straightforwardly to other trials.
|
This Report provides an extensive review of the experimental programme of
direct detection searches of particle dark matter. It focuses mostly on
European efforts, both current and planned, but does so within the broader context of worldwide activity in the field. It aims at identifying the
virtues, opportunities and challenges associated with the different
experimental approaches and search techniques. It presents scientific and
technological synergies, both existing and emerging, with some other areas of
particle physics, notably collider and neutrino programmes, and beyond. It
addresses the issue of infrastructure in light of the growing needs and
challenges of the different experimental searches. Finally, the Report makes a
number of recommendations from the perspective of a long-term future of the
field. They are introduced, along with some justification, in the opening
Overview and Recommendations section and are next summarised at the end of the
Report. Overall, we recommend that the direct search for dark matter particle
interactions with a detector target should be given top priority in
astroparticle physics, and in all particle physics, and beyond, as a positive
measurement will provide the most unambiguous confirmation of the particle
nature of dark matter in the Universe.
|
We present an analysis of the spatial clustering of 695 Ly$\alpha$-emitting
galaxies (LAE) in the MUSE-Wide survey. All objects have spectroscopically
confirmed redshifts in the range $3.3<z<6$. We employ the K-estimator of
Adelberger et al. (2005), adapted and optimized for our sample. We also explore
the standard two-point correlation function approach, which is however less
suited for a pencil-beam survey such as ours. The results from both approaches
are consistent. We parametrize the clustering properties by (i) modelling the
clustering signal with a power law (PL), and (ii) adopting a Halo Occupation
Distribution (HOD) model. Applying HOD modeling, we infer a large-scale bias of
$b_{\rm{HOD}}=2.80^{+0.38}_{-0.38}$ at a median redshift of the number of
galaxy pairs $\langle z_{\rm pair}\rangle\simeq3.82$, while the PL analysis
results in $b_{\rm{PL}}=3.03^{+1.51}_{-0.52}$
($r_0=3.60^{+3.10}_{-0.90}\;h^{-1}$Mpc and $\gamma=1.30^{+0.36}_{-0.45}$). The
implied typical dark matter halo (DMH) mass is
$\log(M_{\rm{DMH}}/[h^{-1}\rm{M}_\odot])=11.34^{+0.23}_{-0.27}$. We study
possible dependencies of the clustering signal on object properties by
bisecting the sample into disjoint subsets, considering Ly$\alpha$ luminosity,
UV absolute magnitude, Ly$\alpha$ equivalent width, and redshift as variables.
We find a suggestive trend of more luminous Ly$\alpha$ emitters residing in
more massive DMHs than their lower Ly$\alpha$ luminosity counterparts. We also
compare our results to mock LAE catalogs based on a semi-analytic model of
galaxy formation and find a stronger clustering signal than in our observed
sample. By adopting a galaxy-conserving model we estimate that the LAEs in the
MUSE-Wide survey will typically evolve into galaxies hosted by halos of
$\log(M_{\rm{DMH}}/[h^{-1}\rm{M}_\odot])\approx13.5$ at redshift zero,
suggesting that we observe the ancestors of present-day galaxy groups.
|
We consider the reflection seismology problem of recovering the locations of
interfaces and the amplitudes of reflection coefficients from seismic data,
which are vital for estimating the subsurface structure. The reflectivity
inversion problem is typically solved using greedy algorithms and iterative
techniques. The sparse Bayesian learning framework and, more recently, deep learning techniques have shown the potential of data-driven approaches to solve
the problem. In this paper, we propose a weighted minimax-concave
penalty-regularized reflectivity inversion formulation and solve it through a
model-based neural network. The network is referred to as deep-unfolded
reflectivity inversion network (DuRIN). We demonstrate the efficacy of the
proposed approach over the benchmark techniques by testing on synthetic 1-D
seismic traces and 2-D wedge models and validation with the simulated 2-D
Marmousi2 model and real data from the Penobscot 3D survey off the coast of
Nova Scotia, Canada.
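A sketch of the kind of iteration DuRIN unfolds is given below, under the assumption that each layer performs an ISTA-style update whose proximal step is the firm-thresholding operator (the proximal map of the minimax-concave penalty); parameter names and weighting are illustrative, not the trained network.

```python
import numpy as np

def firm_threshold(x, lam, mu):
    """Firm thresholding (Gao-Bruce): prox of the minimax-concave penalty, mu > lam."""
    ax = np.abs(x)
    return np.where(ax <= lam, 0.0,
           np.where(ax <= mu, np.sign(x) * mu * (ax - lam) / (mu - lam), x))

def unfolded_iterations(y, A, lam=0.1, mu=0.3, n_layers=20):
    """Recover sparse reflectivity x from a trace y ~ A x (A: wavelet matrix).
    In a deep-unfolded network each layer's parameters would be learned."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # heuristic step size
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = firm_threshold(x + step * A.T @ (y - A @ x), lam, mu)
    return x
```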
|
We show that all the semi-smooth stable complex Godeaux surfaces, classified
in [FPR18a], are smoothable,
and that the moduli stack is smooth of the expected dimension 8 at the
corresponding points.
|
Machine learning on encrypted data can address the concerns related to
privacy and legality of sharing sensitive data with untrustworthy service
providers. Fully Homomorphic Encryption (FHE) is a promising technique to
enable machine learning and inferencing while providing strict guarantees
against information leakage. Since deep convolutional neural networks (CNNs)
have become the machine learning tool of choice in several applications,
several attempts have been made to harness CNNs to extract insights from
encrypted data. However, existing works focus only on ensuring data security
and ignore the security of model parameters. They also report high-level implementations without providing a rigorous analysis of the accuracy, security,
and speed trade-offs involved in the FHE implementation of generic primitive
operators of a CNN such as convolution, non-linear activation, and pooling. In
this work, we consider a Machine Learning as a Service (MLaaS) scenario where
both input data and model parameters are secured using FHE. Using the CKKS
scheme available in the open-source HElib library, we show that operational
parameters of the chosen FHE scheme such as the degree of the cyclotomic
polynomial, depth limitations of the underlying leveled HE scheme, and the
computational precision parameters have a major impact on the design of the
machine learning model (especially, the choice of the activation function and
pooling method). Our empirical study shows that the choice of the aforementioned design parameters results in significant trade-offs between accuracy, security level,
and computational time. Encrypted inference experiments on the MNIST dataset
indicate that other design choices such as ciphertext packing strategy and
parallelization using multithreading are also critical in determining the
throughput and latency of the inference process.
|
The Spin Physics Detector, a universal facility for studying the nucleon spin
structure and other spin-related phenomena with polarized proton and deuteron
beams, is proposed to be placed in one of the two interaction points of the
NICA collider that is under construction at the Joint Institute for Nuclear
Research (Dubna, Russia). At the heart of the project lies JINR's extensive experience with polarized beams.
The main objective of the proposed experiment is the comprehensive study of
the unpolarized and polarized gluon content of the nucleon. Spin measurements
at the Spin Physics Detector at the NICA collider have bright perspectives to
make a unique contribution and challenge our understanding of the spin
structure of the nucleon. In this document the Conceptual Design of the Spin
Physics Detector is presented.
|
We study the measurement of Higgs boson self-couplings through $2\rightarrow
3$ vector boson scattering (VBS) processes in the framework of Standard Model
effective field theory (SMEFT) at both proton and lepton colliders. The SMEFT
contribution to the amplitude of the $2\to3$ VBS processes, taking $W_L
W_L\rightarrow W_L W_L h$ and $W_L W_L\rightarrow h h h$ as examples, exhibits
enhancement with the energy
$\frac{\mathcal{A}^{\text{BSM}}}{\mathcal{A}^{\text{SM}}} \sim
\frac{E^2}{\Lambda^2}$, which indicates the sensitivity of these processes to
the related dimension-six operators in SMEFT. Simulations of the full processes at both hadron and lepton colliders with a variety of collision energies are performed to estimate the allowed regions of $c_6$ and $c_{\Phi_1}$. In particular,
we find that, with the help of exclusively choosing longitudinal polarizations
in the final states and suitable $p_T$ cuts, the $WWh$ process is as important as
the more widely studied triple Higgs production ($hhh$) in the measurement of
Higgs self-couplings. Our analysis indicates that these processes can play
important roles in the measurement of Higgs self-couplings at future 100 TeV pp
colliders and muon colliders. However, their cross sections are generally tiny
at low energy machines, which makes them much more challenging to explore.
|
The proposition that life can spread from one planetary system to another
(interstellar panspermia) has a long history, but this hypothesis is difficult
to test through observations. We develop a mathematical model that takes
parameters such as the microbial survival lifetime, the stellar velocity
dispersion, and the dispersion of ejecta into account in order to assess the
prospects for detecting interstellar panspermia. We show that the correlations
between pairs of life-bearing planetary systems (embodied in the
pair-distribution function from statistics) may serve as an effective
diagnostic of interstellar panspermia, provided that the velocity dispersion of
ejecta is greater than the stellar dispersion. We provide heuristic estimates
of the model parameters for various astrophysical environments, and conclude
that open clusters and globular clusters appear to represent the best targets
for assessing the viability of interstellar panspermia.
|
In this work, we present a number of generator matrices of the form $[I_{2n} \ | \ \tau_2(v)],$ where $I_{2n}$ is the $2n \times 2n$ identity matrix, $v$ is an element of the group matrix ring $M_2(R)G$, $R$ is a finite commutative Frobenius ring and $G$ is a finite group of order 18. We employ
these generator matrices and search for binary $[72,36,12]$ self-dual codes
directly over the finite field $\mathbb{F}_2.$ As a result, we find 134 Type I
and 1 Type II codes of this length, with parameters in their weight enumerators
that were not known in the literature before. We tabulate all of our findings.
|
Multiexcitons in monolayer WSe2 exhibit a suite of optoelectronic phenomena distinct from those of their single-exciton constituents. Here,
photoluminescence action spectroscopy shows that multiexciton formation is
enhanced with increasing optical excitation energy. This enhancement is
attributed to the multiexciton formation processes from an electron-hole plasma
and results in over 300% more multiexciton emission than at lower excitation
energies at 4 K. The energetic onset of the enhancement coincides with the
quasiparticle bandgap, corroborating the role of the electron-hole plasma, and
the enhancement diminishes with increasing temperature. The results reveal that
the strong interactions responsible for ultrafast exciton formation also affect
multiexciton phenomena, and both multiexciton and single exciton states play
significant roles in plasma thermalization in 2D semiconductors.
|
The recently discovered non-Hermitian skin effect (NHSE) manifests the
breakdown of current classification of topological phases in
energy-nonconservative systems, and necessitates the introduction of
non-Hermitian band topology. So far, all NHSE observations are based on one
type of non-Hermitian band topology, in which the complex energy spectrum winds
along a closed loop. As recently characterized along a synthetic dimension on a
photonic platform, non-Hermitian band topology can exhibit almost arbitrary
windings in momentum space, but their actual phenomena in real physical systems
remain unclear. Here, we report the experimental realization of NHSE in a
one-dimensional (1D) non-reciprocal acoustic crystal. With direct acoustic
measurement, we demonstrate that a twisted winding, whose topology consists of
two oppositely oriented loops in contact rather than a single loop, will
dramatically change the NHSE, following previous predictions of unique features
such as the bipolar localization and the Bloch point for a Bloch-wave-like
extended state. This work reveals previously unnoticed features of the NHSE and provides an observation of physical phenomena originating from complex non-Hermitian winding topology.
|
Neuroevolutionary algorithms, which automatically search for neural network structures by means of evolutionary techniques, are computationally costly procedures. In spite of this, these methods are widely applied because of the great performance of the architectures they find. The final outcome of a neuroevolutionary process is the best structure found during the search, and the rest of the procedure is commonly discarded in the literature. However, these searches also produce a good amount of residual information from which valuable knowledge can be extracted. In this paper, we propose an approach that extracts this information from neuroevolutionary runs and uses it to build a metamodel that can positively impact future neural architecture
searches. More specifically, by inspecting the best structures found during
neuroevolutionary searches of generative adversarial networks with varying
characteristics (e.g., based on dense or convolutional layers), we propose a
Bayesian network-based model which can be used to either find strong neural
structures right away, conveniently initialize different structural searches
for different problems, or help future optimization of structures of any type
to keep finding increasingly better structures where uninformed methods get
stuck into local optima.
|
This study analyzes Romanian births, jointly distributed by age groups of mother and father, covering 1958-2019 under the potential influence of significant disruptors. Significant events, such as the application and abrogation of anti-abortion laws, the fall of communism, and migration, and their impacts are analyzed. While in practice we may find examples both pro and contra, a general controversy remains as to whether births should obey the Benford Law (BL). Moreover, the impacts of significant disruptors are rarely discussed in detail in such analyses. I find the distribution of births to be First Digit Benford Law (BL1) conformant on the entire sample, but obtain mixed results regarding BL obedience in the dynamic analysis and by main sub-periods. Even though many disruptors are analyzed, only the 1967 Anti-abortion Decree has a significant impact. I capture an average lag of 15 years between the event, the Anti-abortion Decree, and the onset of distortion in the births distribution. The distortion persists for around 25 years, almost the entire fertility span (ages 15 to 39) of the majority of people from the cohorts born in 1967-1968.
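A sketch of a first-digit Benford (BL1) conformity check is shown below; the expected frequencies are $P(d)=\log_{10}(1+1/d)$, and the `births` array is a placeholder for the counts analysed in the study.

```python
import numpy as np
from scipy.stats import chisquare

def first_digits(x):
    x = np.asarray(x, dtype=float)
    x = x[x > 0]
    return (x / 10 ** np.floor(np.log10(x))).astype(int)  # leading digit 1..9

benford = np.log10(1 + 1 / np.arange(1, 10))   # P(d) for d = 1..9

births = np.random.default_rng(0).lognormal(8, 2, 744)  # placeholder data
obs = np.bincount(first_digits(births), minlength=10)[1:]
stat, pvalue = chisquare(obs, f_exp=benford * obs.sum())
# BL1 conformity is rejected when pvalue falls below the significance level.
```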
|
We implement an algorithm for the computation of the Schouten bracket of weakly nonlocal Hamiltonian operators in three different computer algebra systems: Maple, Reduce and Mathematica. This class of Hamiltonian operators encompasses almost all the examples coming from the theory of (1+1)-dimensional integrable evolutionary PDEs.
|
We provide a rigorous analysis of the quantum optimal control problem in the
setting of a linear combination $s(t)B+(1-s(t))C$ of two noncommuting
Hamiltonians $B$ and $C$. This includes both quantum annealing (QA) and the
quantum approximate optimization algorithm (QAOA). The target is to minimize
the energy of the final ``problem'' Hamiltonian $C$, for a time-dependent and
bounded control schedule $s(t)\in [0,1]$ and $t\in \mathcal{I}:= [0,t_f]$. It was
recently shown, in a purely closed system setting, that the optimal solution to
this problem is a ``bang-anneal-bang'' schedule, with the bangs characterized
by $s(t)= 0$ and $s(t)= 1$ in finite subintervals of $\mathcal{I}$, in particular
$s(0)=0$ and $s(t_f)=1$, in contrast to the standard prescription $s(0)=1$ and
$s(t_f)=0$ of quantum annealing. Here we extend this result to the open system
setting, where the system is described by a density matrix rather than a pure
state. This is the natural setting for experimental realizations of QA and
QAOA. For finite-dimensional environments and without any approximations we
identify sufficient conditions ensuring that either the bang-anneal,
anneal-bang, or bang-anneal-bang schedules are optimal, and recover the
optimality of $s(0)=0$ and $s(t_f)=1$. However, for infinite-dimensional
environments and a system described by an adiabatic Redfield master equation we
do not recover the bang-type optimal solution. In fact we can only identify
conditions under which $s(t_f)=1$, and even this result is not recovered in the
fully Markovian limit. The analysis, which we carry out entirely within the geometric framework of the Pontryagin Maximum Principle, is simpler in the density matrix formulation than in the state vector formulation.
|
We identify the exact microscopic structure of the G photoluminescence center in silicon, a telecommunication-wavelength single-photon source, by first-principles calculations that include a self-consistent many-body perturbation method. The defect consists of $\text{C}_\text{s}\text{C}_\text{i}$ carbon impurities in the $\text{C}_\text{s}-\text{Si}_\text{i}-\text{C}_\text{s}$ configuration in the neutral charge state, where $s$ and $i$ stand for substitutional and interstitial positions in the Si lattice, respectively. We reveal that the observed fine
structure of its optical signals originates from the athermal rotational
reorientation of the defect. We attribute the monoclinic symmetry reported in
optically detected magnetic resonance measurements to the reduced tunneling
rate at very low temperatures. We discuss the thermally activated motional
averaging of the defect properties and the nature of the qubit state.
|
Transition metal dichalcogenides (TMDs) have been a core constituent of 2D
material research throughout the last decade. Over this time, research focus
has progressively shifted from synthesis and fundamental investigations, to
exploring their properties for applied research such as electrochemical
applications and integration in electrical devices. Due to the rapid pace of
development, priority is often given to application-oriented aspects while
careful characterisation and analysis of the TMD materials themselves is
occasionally neglected. This can be particularly evident for characterisations
involving X-ray photoelectron spectroscopy (XPS), where measurement,
peak-fitting, and analysis can be complex and nuanced endeavours requiring
specific expertise. To improve the availability and accessibility of reference
information, here we present a detailed peak-fitted XPS analysis of ten
transition metal chalcogenides. The materials were synthesised as large-area
thin-films on SiO2 using direct chalcogenisation of pre-deposited metal films.
Alongside XPS, the Raman spectra with several excitation wavelengths for each
material are also provided. These complementary characterisation methods can
provide a more complete understanding of the composition and quality of the
material. As material stability is a crucial factor when considering
applications, the in-air thermal stability of the TMDs was investigated after
several annealing points up to 400 °C. This delivers a trend of evolving
XPS and Raman spectra for each material which improves interpretation of their
spectra while also indicating their ambient thermal limits. This provides an
accessible library and set of guidelines to characterise, compare, and discuss
TMD XPS and Raman spectra.
|
The properties of ideal tri-functional dendrimers with forty-five,
ninety-three and one hundred and eighty-nine branches are investigated. Three
methods are employed to calculate the mean-square radius of gyration,
$g$-ratios, asphericity, shape parameters and form factor. These methods
include a Kirchhoff matrix eigenvalue technique, the graph theory approach of
Benhamou et al. (2004), and Monte Carlo simulations using a growth algorithm. A
novel technique for counting paths in the graph representation of the
dendrimers is presented. All the methods are in excellent agreement with each
other and with available theoretical predictions. Dendrimers become more
symmetrical as the generation and the number of branches increase.
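For the Kirchhoff-matrix route, a sketch follows: for an ideal (phantom) polymer, the mean-square radius of gyration is $R_g^2 = (b^2/N)\sum_k 1/\lambda_k$ over the nonzero eigenvalues of the Kirchhoff (graph Laplacian) matrix. The dendrimer generator below is a generic trifunctional construction; with 4 generations it has 45 bonds, matching the smallest branch count mentioned above, though the paper's exact topologies are an assumption here.

```python
import networkx as nx
import numpy as np

def trifunctional_dendrimer(generations):
    G = nx.Graph()
    G.add_node(0)
    frontier, nxt = [0], 1
    for _ in range(generations):
        new = []
        for node in frontier:
            # the core emits 3 branches; every outer node emits 2 (functionality 3)
            for _ in range(3 if node == 0 else 2):
                G.add_edge(node, nxt)
                new.append(nxt)
                nxt += 1
        frontier = new
    return G

def rg_squared(G, b=1.0):
    lam = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray())
    return b**2 / G.number_of_nodes() * np.sum(1.0 / lam[lam > 1e-10])

G = trifunctional_dendrimer(4)                 # 46 beads, 45 bonds
chain = nx.path_graph(G.number_of_nodes())     # linear chain with the same N
print(rg_squared(G) / rg_squared(chain))       # g-ratio < 1: dendrimer is compact
```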
|
The colonisation of a soft passive material by motile cells such as bacteria
is common in biology. The resulting colonies of the invading cells are often
observed to exhibit intricate patterns whose morphology and dynamics can depend
on a number of factors, particularly the mechanical properties of the substrate
and the motility of the individual cells. We use simulations of a minimal 2D model of self-propelled rods moving through a passive compliant medium consisting of particles that offer elastic resistance before being plastically displaced from their equilibrium positions. It is observed that the
motility-induced clustering of active (self-propelled) particles is crucial for
understanding the morphodynamics of colonisation. Clustering enables motile
colonies to spread faster than they would have as isolated particles. The
colonisation rate depends non-monotonically on substrate stiffness with a
distinct maximum at a non-zero value of substrate stiffness. This is observed
to be due to a change in the morphology of clusters. Furrow networks created by
the active particles have a fractal-like structure whose dimension varies
systematically with substrate stiffness but is less sensitive to particle
activity. The power-law growth exponent of the furrowed area is smaller than
unity, suggesting that, to sustain such extensive furrow networks, colonies
must regulate their overall growth rate.
|
We use exponent pairs to establish the existence of many $x^a$-smooth numbers
in short intervals $[x-x^b,x]$, when $a>1/2$. In particular, $b=1-a-a(1-a)^3$
is admissible. Assuming the exponent-pairs conjecture, one can take
$b=(1-a)/2+\epsilon$. As an application, we show that $[x-x^{0.4872},x]$
contains many practical numbers when $x$ is large.
|
Dust attenuation of an inclined galaxy can cause additional asymmetries in
observations, even if the galaxy has a perfectly symmetric structure. {Taking
advantage of the integral field spectroscopic data observed by the SDSS-IV
MaNGA survey, we investigate the asymmetries of the emission-line and continuum
maps of star-forming disk galaxies.} We define new parameters, $A_a$ and $A_b$,
to estimate the asymmetries of a galaxy about its major and minor axes,
respectively. Comparing $A_a$ and $A_b$ in different inclination bins, we
attempt to detect the asymmetries caused by dust. For the continuum images, we
find that $A_a$ increases with the inclination, while the $A_b$ is a constant
as inclination changes. Similar trends are found for $g-r$, $g-i$ and $r-i$
color images. The dependence of the asymmetry on inclination suggests a thin
dust layer with a scale height smaller than the stellar populations. For the
H$\alpha$ and H$\beta$ images, neither $A_a$ nor $A_b$ shows a significant
correlation with inclination. Also, we do not find any significant dependence
of the asymmetry of $E(B-V)_g$ on inclination, implying that the dust in the
thick disk component is not significant. Compared to the SKIRT simulation, the
results suggest that the thin dust disk has an optical depth $\tau_V\sim0.2$.
This is the first time that the asymmetries caused by dust attenuation and
inclination have been probed statistically with a large sample. Our results
indicate that the combination of dust attenuation and inclination effects is a
potential indicator of the 3D disk orientation.
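As a schematic of flip-based asymmetry measures (one plausible operational convention; the paper's exact definitions of $A_a$ and $A_b$ may differ in normalisation and masking), an image aligned with its photometric axes can be mirrored and compared to itself:

```python
import numpy as np

def axis_asymmetry(img, axis):
    # Normalised residual between the image and its mirror about one axis;
    # assumes the galaxy image has been rotated so the major axis is
    # horizontal (axis=0 flips across the major axis, axis=1 the minor).
    mirrored = np.flip(img, axis=axis)
    return np.abs(img - mirrored).sum() / (2.0 * np.abs(img).sum())

# A_a-like and A_b-like statistics for a continuum or emission-line map:
# A_a = axis_asymmetry(img, axis=0); A_b = axis_asymmetry(img, axis=1)
```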
|
We present a novel method for sampling iso-likelihood contours in nested
sampling using a class of machine learning algorithms known as normalising
flows, and incorporate it into our sampler nessai. Nessai is designed for
problems
where computing the likelihood is computationally expensive and therefore the
cost of training a normalising flow is offset by the overall reduction in the
number of likelihood evaluations. We validate our sampler on 128 simulated
gravitational wave signals from compact binary coalescence and show that it
produces unbiased estimates of the system parameters. Subsequently, we compare
our results to those obtained with dynesty and find good agreement between the
computed log-evidences whilst requiring 2.07 times fewer likelihood
evaluations. We also highlight how the likelihood evaluation can be
parallelised in nessai without any modifications to the algorithm. Finally, we
outline diagnostics included in nessai and how these can be used to tune the
sampler's settings.
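For orientation, a minimal usage sketch of nessai's documented interface follows (a toy two-dimensional Gaussian likelihood; argument names follow the package documentation and may differ between versions):

```python
import numpy as np
from nessai.flowsampler import FlowSampler
from nessai.model import Model

class GaussianModel(Model):
    """Toy 2-D Gaussian likelihood with a flat prior."""
    def __init__(self):
        self.names = ["x", "y"]
        self.bounds = {"x": [-10, 10], "y": [-10, 10]}

    def log_prior(self, x):
        # Flat prior inside the bounds, -inf outside.
        log_p = np.log(self.in_bounds(x), dtype="float")
        for n in self.names:
            log_p -= np.log(self.bounds[n][1] - self.bounds[n][0])
        return log_p

    def log_likelihood(self, x):
        return -0.5 * (x["x"] ** 2 + x["y"] ** 2) - np.log(2 * np.pi)

sampler = FlowSampler(GaussianModel(), output="outdir", nlive=1000)
sampler.run()  # produces posterior samples and a log-evidence estimate
```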
|
Recently, the LHCb Collaboration reported on the evidence for a hidden charm
pentaquark state with strangeness, i.e., $P_{cs}(4459)$, in the $J/\psi\Lambda$
invariant mass distribution of the $\Xi_b^-\to J/\psi \Lambda K^-$ decay. In
this work, assuming that $P_{cs}(4459)$ is a $\bar{D}^*\Xi_c$ molecular state,
we study this decay via triangle diagrams $\Xi_b\rightarrow
\bar{D}_s^{(*)}\Xi_c\to (\bar{D}^{(*)}\bar{K})\Xi_c\to P_{cs} \bar{K}\to
(J/\psi\Lambda) \bar{K}$. Our study shows that the production yield of a spin
3/2 $\bar{D}^*\Xi_c$ state is approximately one order of magnitude larger than
that of a spin $1/2$ state due to the interference of $\bar{D}_s\Xi_c$ and
$\bar{D}_s^*\Xi_c$ intermediate states. We obtain a model independent
constraint on the product of couplings $g_{P_{cs}\bar{D}^*\Xi_c}$ and
$g_{P_{cs}J/\psi\Lambda}$. With the predictions of two particular molecular
models as inputs, we calculate the branching ratio of $\Xi_b^-\to
(P_{cs}\to)J/\psi\Lambda K^- $ and compare it with the experimental
measurement. We further predict the lineshape of this decay which could be
useful to future experimental studies.
|
A graph $G$ is called a $2K_2$-free graph if it does not contain $2K_2$ as an
induced subgraph. In 2014, Broersma, Patel and Pyatkin showed that every
25-tough $2K_2$-free graph on at least three vertices is Hamiltonian. Recently,
Shan improved this result by showing that 3-tough is sufficient instead of
25-tough. In this paper, we show that every 2-tough $2K_2$-free graph on at
least three vertices is Hamiltonian, which was conjectured by Gao and
Pasechnik.
|
Deformable image registration is fundamental for many medical image analyses.
A key obstacle for accurate image registration is the variations in image
appearance. Recently, deep learning-based registration methods (DLRs), using
deep neural networks, have computational efficiency that is several orders of
magnitude greater than traditional optimization-based registration methods
(ORs). A major drawback of DLRs, however, is their disregard for the
target-pair-specific optimization that is inherent in ORs; instead, they rely
on a globally optimized network trained on a set of training samples to
achieve faster registration. DLRs therefore have an inherently degraded
ability to adapt to appearance variations and perform poorly, compared to ORs,
when image pairs (fixed/moving images) differ greatly in appearance. Hence, we
propose an Appearance Adjustment Network (AAN) where we leverage anatomy edges,
through an anatomy-constrained loss function, to generate an anatomy-preserving
appearance transformation. We designed the AAN so that it can be readily
inserted into a wide range of DLRs, to reduce the appearance differences
between the fixed and moving images. Our AAN and the DLR's network can be
trained cooperatively in an unsupervised and end-to-end manner. We evaluated
our AAN with two widely used DLRs - Voxelmorph (VM) and FAst IMage
registration (FAIM) - on three public 3D brain magnetic resonance (MR) image
datasets - IBSR18, Mindboggle101, and LPBA40. The results show that DLRs using
the AAN improved performance and achieved better results than state-of-the-art
ORs.
|
Angle-resolved photoemission spectroscopy (ARPES) is one of the most powerful
experimental techniques in condensed matter physics. Synchrotron ARPES, which
uses photons with high flux and continuously tunable energy, has become
particularly important. However, an excellent synchrotron ARPES system must
have features such as a small beam spot, ultrahigh energy resolution, and a
user-friendly operation interface. A synchrotron beamline and an endstation
(BL03U) were designed and constructed at the Shanghai Synchrotron Radiation
Facility. The beam spot size at the sample position is 7.5 (V) $\mu$m $\times$
67 (H) $\mu$m, and the fundamental photon range is 7-165 eV; the ARPES system
enables photoemission with an energy resolution of 2.67 meV. In
addition, the ARPES system of this endstation is equipped with a six-axis
cryogenic sample manipulator (the lowest temperature is 7 K) and is integrated
with an oxide molecular beam epitaxy system and a scanning tunneling
microscope, which can provide an advanced platform for in-situ characterization
of the fine electronic structure of condensed matter.
|
Run-and-tumble particles, frequently considered today for modeling bacterial
locomotion, naturally appear outside a biological context as well, e.g. for
producing waves in the telegraph process. Here, we use a wave function to drive
their propulsion and tumbling. Such quantum-active motion realizes a jittery
motion of Dirac electrons (as in the famous Zitterbewegung): the Dirac electron
is a run-and-tumble particle, where the tumbling is between chiralities. We
visualize the trajectories in diffraction and double slit experiments for
electrons. In particular, that yields the time-of-arrival statistics of the
electrons at the screen. Finally, we observe that away from pure quantum
guidance, run-and-tumble particles with suitable spacetime-dependent parameters
produce an interference pattern as well.
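To fix ideas about the underlying classical process, here is a minimal 1-D run-and-tumble (telegraph-process) trajectory sampler; the parameters are illustrative, and the snippet does not include the wave-function guidance discussed above:

```python
import numpy as np

def run_and_tumble(v=1.0, gamma=0.5, T=100.0, dt=0.01, seed=0):
    # Move at speed v with direction s = +/-1; flip ("tumble") at rate gamma.
    rng = np.random.default_rng(seed)
    x, s, traj = 0.0, 1, []
    for _ in range(int(T / dt)):
        if rng.random() < gamma * dt:
            s = -s  # tumble: the analogue of a chirality flip
        x += s * v * dt
        traj.append(x)
    return np.array(traj)

trajectory = run_and_tumble()
```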
|
We introduce a basis set consisting of three-dimensional Deslauriers--Dubuc
wavelets and numerically solve the Schr\"odinger equations of the hydrogen
atom, helium atom, hydrogen molecular ion, hydrogen molecule, and lithium
hydride molecule with Hartree-Fock and DFT methods. We also compute the 2s and
2p
excited states of hydrogen. The Coulomb singularity at the nucleus is handled
by using a pseudopotential. Results are compared with those of CCCBDB and
BigDFT. The eigenvalue problem is solved with Arnoldi and Lanczos methods, and
the Poisson equation with GMRES and CGNR methods. The various matrix elements
are computed using the biorthogonality relations of the interpolating wavelets.
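The wavelet basis itself is beyond a short snippet, but the linear-algebra workflow named above can be illustrated with a plain finite-difference stand-in (SciPy's eigsh is a Lanczos-type eigensolver and gmres the Krylov solver for Poisson-type systems; the 1-D harmonic oscillator below is chosen only for brevity):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh, gmres

n, h = 2000, 0.01
x = (np.arange(n) - n // 2) * h
D2 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2  # d^2/dx^2
H = -0.5 * D2 + diags(0.5 * x**2)      # model Hamiltonian (atomic units)

E, psi = eigsh(H, k=3, which="SA")     # Lanczos: lowest three eigenpairs
print(E)                               # close to [0.5, 1.5, 2.5]

rho = np.exp(-x**2)                    # model charge density
V, info = gmres(-D2, 4 * np.pi * rho)  # 1-D Poisson solve via GMRES
```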
|
A ubiquitous challenge in design space exploration or uncertainty
quantification of complex engineering problems is the minimization of
computational cost. A useful tool to ease the burden of solving such systems is
model reduction. This work considers a stochastic model reduction method (SMR),
in the context of polynomial chaos (PC) expansions, where low-fidelity (LF)
samples are leveraged to form a stochastic reduced basis. The reduced basis
enables the construction of a bi-fidelity (BF) estimate of a quantity of
interest from a small number of high-fidelity (HF) samples. A successful BF
estimate approximates the quantity of interest with accuracy comparable to the
HF model and computational expense close to the LF model. We develop new error
bounds for the SMR approach and present a procedure to practically utilize
these bounds in order to assess the appropriateness of a given pair of LF and
HF models for BF estimation. The effectiveness of the SMR approach and the
utility of the error bounds are demonstrated in three numerical examples.
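A schematic of the bi-fidelity mechanics (not the paper's PC-based formulation; the node-selection and lifting rules below are a generic pivoted-QR variant used only for illustration):

```python
import numpy as np
from scipy.linalg import qr

def bifidelity_estimate(u_lf, hf_solver, r):
    # u_lf: columns are LF solution snapshots over the parameter samples.
    # Pick the r most informative samples via column-pivoted QR, ...
    _, _, piv = qr(u_lf, pivoting=True)
    idx = piv[:r]
    # ... learn how every LF snapshot is expressed in that reduced basis, ...
    coeffs = np.linalg.lstsq(u_lf[:, idx], u_lf, rcond=None)[0]
    # ... then run the expensive HF model only at those r samples and lift.
    u_hf_r = np.column_stack([hf_solver(i) for i in idx])
    return u_hf_r @ coeffs  # BF approximation of the full HF ensemble
```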
|
Membranes derived from ultrathin polymeric films are promising for fast
separations, but currently available approaches to producing polymer films
with greatly reduced thicknesses on porous supports still face challenges.
Here,
defect-free ultrathin polymer covering films (UPCFs) are realized by a facile
general approach of rapid solvent evaporation. By fast evaporating dilute
polymer solutions, we realize ultrathin coating (~30 nm) of porous substrates
exclusively on the top surface, forming UPCFs with a block copolymer of
polystyrene-block-poly(2-vinyl pyridine) at room temperature or a homopolymer
of poly(vinyl alcohol) (PVA) at elevated temperatures. With subsequent
selective swelling of the block copolymer and crosslinking of the PVA, the
resulting bi-layered composite structures serve as highly permeable membranes,
delivering ~2-10 times higher permeability in ultrafiltration and pervaporation
applications than state-of-the-art separation membranes with similar rejections
and selectivities. This work opens up a new, facile avenue for the controllable
fabrication of ultrathin coatings on porous substrates, which shows great
potential in membrane-based separations and other areas.
|
Semantic Scene Completion aims at reconstructing a complete 3D scene with
precise voxel-wise semantics from a single-view depth or RGBD image. It is a
crucial but challenging problem for indoor scene understanding. In this work,
we present a novel framework named Scene-Instance-Scene Network
(\textit{SISNet}), which takes advantage of both instance- and scene-level
semantic information. Our method is capable of inferring fine-grained shape
details as well as nearby objects whose semantic categories are easily
mixed-up. The key insight is that we decouple the instances from a coarsely
completed semantic scene instead of a raw input image to guide the
reconstruction of instances and the overall scene. SISNet conducts iterative
scene-to-instance (SI) and instance-to-scene (IS) semantic completion.
Specifically, the SI is able to encode objects' surrounding context for
effectively decoupling instances from the scene and each instance could be
voxelized into higher resolution to capture finer details. With IS,
fine-grained instance information can be integrated back into the 3D scene and
thus leads to more accurate semantic scene completion. Utilizing such an
iterative mechanism, the scene and instance completion benefits each other to
achieve higher completion accuracy. Extensively experiments show that our
proposed method consistently outperforms state-of-the-art methods on both real
NYU, NYUCAD and synthetic SUNCG-RGBD datasets. The code and the supplementary
material will be available at \url{https://github.com/yjcaimeow/SISNet}.
|
We prove that a system of equations introduced by Demailly (to attack a
conjecture of Griffiths) has a smooth solution for a direct sum of ample line
bundles on a Riemann surface. We also reduce the problem for general vector
bundles to an a priori estimate using Leray-Schauder degree theory.
|
We use $Gaia$ eDR3 data and legacy spectroscopic surveys to map the Milky Way
disc substructure towards the Galactic Anticenter at heliocentric distances
$d\geq10\,\rm{kpc}$. We report the discovery of multiple previously undetected
filaments embedded in the outer disc in highly extincted regions. Stars in
these over-densities have distance gradients expected for disc material and
move on disc-like orbits with $v_{\phi}\sim170-230\,\rm{km\,s^{-1}}$, showing
small spreads in energy. Such a morphology argues against a quiescently growing
Galactic thin disc. Some of these structures are interpreted as excited outer
disc material, kicked up by satellite impacts and currently undergoing
phase-mixing ("feathers"). Due to the long timescale in the outer disc regions,
these structures can stay coherent in configuration space over several Gyrs. We
nevertheless note that some of these structures could also be folds in the
perturbed disc seen in projection from the Sun's location. A full 6D
phase-space characterization and age dating of these structure should help
distinguish between the two possible morphologies.
|
Scoring rules aggregate individual rankings by assigning some points to each
position in each ranking such that the total sum of points provides the overall
ranking of the alternatives. They are widely used in sports competitions
consisting of multiple contests. We study the tradeoff between two risks in
this setting: (1) the threat of an early clinch, where the title is secured
before the last contest(s) of the competition take place; (2) the danger of
winning the competition without finishing first in any contest. In particular,
four historical points scoring systems of the Formula One World Championship
are compared with the family of geometric scoring rules, recently proposed by
an axiomatic approach. The schemes used in practice are found to be competitive
with respect to these goals, and the current rule seems to be a reasonable
compromise close to the Pareto frontier. Our results shed more light on the
evolution of the Formula One points scoring systems and contribute to the issue
of choosing the set of point values.
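As a concrete illustration of how a points scheme induces an overall ranking (the 2010-present F1 top-10 values are real, ignoring bonus points; the geometric vector shown is one common parameterisation of that family, not necessarily the paper's):

```python
from collections import Counter

def standings(results, points):
    # results: one finishing order (list of names) per contest;
    # points[i] is awarded for finishing position i.
    total = Counter()
    for order in results:
        for pos, name in enumerate(order):
            total[name] += points[pos] if pos < len(points) else 0
    return total.most_common()

f1_current = [25, 18, 15, 12, 10, 8, 6, 4, 2, 1]
geometric_q2 = [2**k for k in range(9, -1, -1)]  # q = 2 geometric rule

races = [["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]
print(standings(races, f1_current))    # B wins on points
print(standings(races, geometric_q2))  # a steeper rule weights wins more
```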
|
We consider a stochastic model describing the spiking activity of a countable
set of neurons spatially organized into a homogeneous tree of degree $d$, $d
\geq 2$; the degree of a neuron is just the number of connections it has.
Roughly, the model is as follows. Each neuron is represented by its membrane
potential, which takes non-negative integer values. Neurons spike at Poisson
rate 1, provided they have strictly positive membrane potential. When a spike
occurs, the potential of the spiking neuron changes to 0, and all neurons
connected to it receive a positive amount of potential. Moreover, between
successive spikes and without receiving any spiking inputs from other neurons,
each neuron's potential behaves independently as a pure death process with
death rate $\gamma \geq 0$. In this article, we show that if the number $d$ of
connections is large enough, then the process exhibits at least two phase
transitions depending on the choice of rate $\gamma$: for large values of
$\gamma$, the neural spiking activity almost surely goes extinct; for small
values of $\gamma$, a fixed neuron spikes infinitely many times with positive
probability; and for "intermediate" values of $\gamma$, the system has a
positive probability of always presenting spiking activity but, individually,
each neuron eventually stops spiking and remains at rest forever.
|
For a complex projective manifold, Walker has defined a regular homomorphism
lifting Griffiths' Abel-Jacobi map on algebraically trivial cycle classes to a
complex abelian variety, which admits a finite homomorphism to the Griffiths
intermediate Jacobian. Recently Suzuki gave an alternate, Hodge-theoretic,
construction of this Walker Abel-Jacobi map. We provide a third construction
based on a general lifting property for surjective regular homomorphisms, and
prove that the Walker Abel-Jacobi map descends canonically to any field of
definition of the complex projective manifold. In addition, we determine the
image of the l-adic Bloch map restricted to algebraically trivial cycle classes
in terms of the coniveau filtration.
|
For every $p\in(0,\infty)$, a new metric invariant called umbel $p$-convexity
is introduced. The asymptotic notion of umbel convexity captures the geometry
of countably branching trees, much in the same way as Markov convexity, the
local invariant which inspired it, captures the geometry of bounded degree
trees. Umbel convexity is used to provide a "Poincar\'e-type" metric
characterization of the class of Banach spaces that admit an equivalent norm
with Rolewicz's property $(\beta)$. We explain how a relaxation of umbel
$p$-convexity, called umbel cotype $p$, plays a role in obtaining compression
rate bounds for coarse embeddings of countably branching trees. Local analogs
of these invariants, fork $q$-convexity and fork cotype $q$, are introduced and
their relationship to Markov $q$-convexity and relaxations of the $q$-tripod
inequality is discussed. The metric invariants are estimated for a large class
of Heisenberg groups. Finally, a new characterization of non-negative curvature
is given.
|
Distance metrics and their nonlinear variants play a crucial role in machine
learning for real-world problem solving. We demonstrate how Euclidean and
cosine distance measures differ not only theoretically but also in a
real-world medical application, namely, outcome prediction of drug
prescription. Euclidean distance exhibits favorable properties for local
geometry problems and is therefore suited to short-term diseases with
low-variation outcome observations. In contrast, for highly variable chronic
diseases, cosine distance is preferable. These different geometric properties
lead to different submanifolds in the original embedding space and, hence, to
different optimized nonlinear kernel embedding frameworks. We first establish
the geometric properties needed in these frameworks and, from these
properties, interpret their differences from several perspectives. Our
evaluation on real-world, large-scale electronic health records, together with
embedding-space visualization, empirically validates our approach.
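A minimal numerical contrast between the two measures (toy vectors; the point is that cosine distance ignores magnitude while Euclidean distance does not):

```python
import numpy as np

def euclidean(u, v):
    return np.linalg.norm(u - v)

def cosine_distance(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Same direction, different magnitude -- e.g. the same outcome profile
# accumulated over a much longer observation span:
u = np.array([1.0, 2.0, 3.0])
v = 10.0 * u
print(euclidean(u, v))        # large: magnitude dominates
print(cosine_distance(u, v))  # 0.0: direction is identical
```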
|
Federated edge learning (FEEL) is a promising distributed learning technique
for next-generation wireless networks. FEEL preserves the user's privacy,
reduces the communication costs, and exploits the unprecedented capabilities of
edge devices to train a shared global model by leveraging a massive amount of
data generated at the network edge. However, FEEL might significantly shorten
energy-constrained participating devices' lifetime due to the power consumed
during the model training rounds. To address this issue, this paper proposes
a novel approach that minimizes computation and communication energy
consumption during FEEL rounds. First, we introduce a modified local training
algorithm that intelligently selects only the samples that enhance the model's
quality, based on a predetermined threshold probability. Then, the problem is
formulated as a joint energy minimization and resource allocation optimization
problem to obtain the optimal local computation time and the
optimal transmission time that minimize the total energy consumption
considering the worker's energy budget, available bandwidth, channel states,
beamforming, and local CPU speed. After that, we introduce a tractable solution
to the formulated problem that ensures the robustness of FEEL. Our simulation
results show that our solution substantially outperforms the baseline FEEL
algorithm as it reduces the local consumed energy by up to 79%.
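One plausible reading of the sample-selection step is sketched below (the threshold rule and its inputs are assumptions for illustration; the paper formulates the criterion as a threshold probability):

```python
import numpy as np

def select_informative(predict_proba, X, y, p_thresh=0.9):
    # Keep a sample for the next local-training epoch only if the current
    # model is still unsure about it, i.e. its predicted probability for
    # the true label falls below the predetermined threshold.
    proba = predict_proba(X)                 # shape (n_samples, n_classes)
    conf = proba[np.arange(len(y)), y]       # confidence on the true label
    keep = conf < p_thresh
    return X[keep], y[keep]
```

Training on fewer, more informative samples shortens the local computation phase, which is exactly the term the joint optimization then trades off against transmission time.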
|
A class of bivariate infinite series solutions of the elliptic and hyperbolic
Kepler equations is described, adding to the handful of 1-D series that have
been found throughout the centuries. This result is based on an iterative
procedure for the analytical computation of all the higher-order partial
derivatives of the eccentric anomaly with respect to the eccentricity $e$ and
mean anomaly $M$ in a given base point $(e_c,M_c)$ of the $(e,M)$ plane.
Explicit examples of such bivariate infinite series are provided, corresponding
to different choices of $(e_c,M_c)$, and their convergence is studied
numerically. In particular, the polynomials that are obtained by truncating the
infinite series up to the fifth degree reach high levels of accuracy in
significantly large regions of the parameter space $(e,M)$. Besides their
theoretical interest, these series can be used for designing 2-D spline
numerical algorithms for efficiently solving Kepler's equations for all values
of the eccentricity and mean anomaly.
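A standard Newton iteration provides a convenient ground truth when checking such truncated polynomials numerically (a generic solver, not the series method described above):

```python
import numpy as np

def kepler_E(M, e, tol=1e-14, max_iter=50):
    # Solve the elliptic Kepler equation E - e*sin(E) = M by Newton's method.
    E = M if e < 0.8 else np.pi  # common starting guess
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# e.g. compare a degree-5 truncation around (e_c, M_c) against kepler_E
print(kepler_E(M=1.0, e=0.3))
```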
|
Galaxy internal structure growth has long been accused of inhibiting star
formation in disc galaxies. We investigate the potential physical connection
between the growth of dispersion-supported stellar structures (e.g. classical
bulges) and the position of galaxies on the star-forming main sequence at
$z\sim0$. Combining the might of the SAMI and MaNGA galaxy surveys, we measure
the $\lambda_{Re}$ spin parameter for 3781 galaxies over $9.5 < \log M_{\star}
[\rm{M}_{\odot}] < 12$. At all stellar masses, galaxies at the locus of the
main sequence possess $\lambda_{Re}$ values indicative of intrinsically
flattened discs. However, above $\log M_{\star}[\rm{M}_{\odot}]\sim10.5$ where
the main sequence starts bending, we find tantalising evidence for an increase
in the number of galaxies with dispersion-supported structures, perhaps
suggesting a connection between bulges and the bending of the main sequence.
Moving above the main sequence, we see no evidence of any change in the typical
spin parameter in galaxies once gravitationally-interacting systems are
excluded from the sample. Similarly, up to 1 dex below the main sequence,
$\lambda_{Re}$ remains roughly constant and only at very high stellar masses
($\log M_{\star}[\rm{M}_{\odot}]>11$), do we see a rapid decrease in
$\lambda_{Re}$ once galaxies decline in star formation activity. If this trend
is confirmed, it would be indicative of different quenching mechanisms acting
on high- and low-mass galaxies. The results suggest that while a population of
galaxies possessing some dispersion-supported structure is already present on
the star-forming main sequence, further growth would be required after the
galaxy has quenched to match the kinematic properties observed in passive
galaxies at $z\sim0$.
|
We propose two deep neural network-based methods for solving semi-martingale
optimal transport problems. The first method is based on a
relaxation/penalization of the terminal constraint, and is solved using deep
neural networks. The second method is based on the dual formulation of the
problem, which we express as a saddle point problem, and is solved using
adversarial networks. Both methods are mesh-free and therefore mitigate the
curse of dimensionality. We test the performance and accuracy of our methods on
several examples up to dimension 10. We also apply the first algorithm to a
portfolio optimization problem where the goal is, given an initial wealth
distribution, to find an investment strategy leading to a prescribed terminal
wealth distribution.
|
We present Meta Learning for Knowledge Distillation (MetaDistil), a simple
yet effective alternative to traditional knowledge distillation (KD) methods
where the teacher model is fixed during training. We show the teacher network
can learn to better transfer knowledge to the student network (i.e., learning
to teach) with the feedback from the performance of the distilled student
network in a meta learning framework. Moreover, we introduce a pilot update
mechanism to improve the alignment between the inner-learner and meta-learner
in meta learning algorithms that focus on an improved inner-learner.
Experiments on various benchmarks show that MetaDistil can yield significant
improvements over traditional KD algorithms and is less sensitive to the
choice of student capacity and hyperparameters, facilitating the use of KD on
different tasks and models. The code is available at
https://github.com/JetRunner/MetaDistil
|
The origin and composition of ultra-high energy cosmic rays (UHECRs) remain a
mystery. The common lore is that UHECRs are deflected from their primary
directions by the Galactic and extragalactic magnetic fields. Here we describe
an extragalactic contribution to the deflection of UHECRs that does not depend
on the strength and orientation of the initial seed field. Using the
IllustrisTNG simulations, we show that outflow-driven magnetic bubbles created
by feedback processes during galaxy formation deflect approximately half of all
$10^{20}$ eV protons by $1^{\circ}$ or more, and up to $20$-$30^{\circ}$. This
implies that the deflection in the intergalactic medium must be taken into
account in order to identify the sources of UHECRs.
|
To understand the true nature of black holes, fundamental theoretical
developments should be linked all the way to observational features of black
holes in their natural astrophysical environments. Here, we take several steps
to establish such a link. We construct a family of spinning, regular black-hole
spacetimes based on a locality principle for new physics and analyze their
shadow images. We identify characteristic image features associated with
regularity (increased compactness and relative stretching) and with the
locality principle (cusps and asymmetry) that persist in the presence of a
simple
analytical disk model. We conjecture that these occur as universal features of
distinct classes of regular black holes based on different sets of construction
principles for the corresponding spacetimes.
|
We develop a convergence-rate analysis of momentum with cyclical step-sizes.
We show that, under some assumptions on the spectral gap of Hessians in
machine learning, cyclical step-sizes are provably faster than constant
step-sizes.
More precisely, we develop a convergence rate analysis for quadratic objectives
that provides optimal parameters and shows that cyclical learning rates can
improve upon traditional lower complexity bounds. We further propose a
systematic approach to design optimal first order methods for quadratic
minimization with a given spectral structure. Finally, we provide a local
convergence rate analysis beyond quadratic minimization for the proposed
methods and illustrate our findings through benchmarks on least squares and
logistic regression problems.
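The mechanics of the method are easy to sketch (toy spectrum and hand-picked cycle; the paper derives the optimal cycle from the Hessian's spectral structure rather than choosing it ad hoc):

```python
import numpy as np

def heavy_ball_cyclic(A, b, steps, beta, x0, n_iter=500):
    # Heavy-ball momentum on f(x) = 0.5 x^T A x - b^T x, cycling through
    # a short list of step-sizes.
    x, x_prev = x0.copy(), x0.copy()
    for k in range(n_iter):
        h = steps[k % len(steps)]
        x, x_prev = x - h * (A @ x - b) + beta * (x - x_prev), x
    return x

rng = np.random.default_rng(0)
# Spectrum with a gap: eigenvalue clusters near 1-2 and near 9-10.
eigs = np.concatenate([rng.uniform(1, 2, 25), rng.uniform(9, 10, 25)])
A, b = np.diag(eigs), rng.standard_normal(50)
x = heavy_ball_cyclic(A, b, steps=[0.15, 0.05], beta=0.5, x0=np.zeros(50))
print(np.linalg.norm(A @ x - b))  # residual after 500 iterations
```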
|
We study the constraints imposed by perturbative unitarity on the new physics
interpretation of the muon $g-2$ anomaly. Within a Standard Model Effective
Field Theory (SMEFT) approach, we find that scattering amplitudes sourced by
effective operators saturate perturbative unitarity at about 1 PeV. This
corresponds to the highest energy scale that needs to be probed in order to
resolve the new physics origin of the muon $g-2$ anomaly. On the other hand,
simplified models (e.g.~scalar-fermion Yukawa theories) in which renormalizable
couplings are pushed to the boundary of perturbativity still imply new on-shell
states below 200 TeV. We finally suggest that the highest new physics scale
responsible for the anomalous effect can be reached in non-renormalizable
models at the PeV scale.
|
Self-Admitted Technical Debt (SATD) is a metaphorical concept to describe the
self-documented addition of technical debt to a software project in the form of
source code comments. SATD can linger in projects and degrade source-code
quality, but it can also be more visible than unintentionally added or
undocumented technical debt. Understanding the implications of adding SATD to a
software project is important because developers can benefit from a better
understanding of the quality trade-offs they are making. However, empirical
studies, analyzing the survivability and removal of SATD comments, are
challenged by potential code changes or SATD comment updates that may interfere
with properly tracking their appearance, existence, and removal. In this paper,
we propose SATDBailiff, a tool that uses an existing state-of-the-art SATD
detection tool, to identify SATD in method comments, then properly track their
lifespan. SATDBailiff takes links to open-source projects as input and
outputs a list of all identified SATD instances; for each detected SATD,
SATDBailiff reports all of its associated changes, including any updates to
its text, all the way to its removal. The goal of SATDBailiff is to aid
researchers and practitioners in better tracking SATD instances and to provide
them with a reliable tool that can be easily extended. SATDBailiff was
validated using a dataset of previously detected and manually validated SATD
instances. SATDBailiff is publicly available as an open-source tool, along
with the manual analysis of the SATD instances associated with its validation,
on the project website.
|
Requirements engineering (RE) is a key area to address sustainability
concerns in system development. Approaches have been proposed to elicit
sustainability requirements from interested stakeholders before system design.
However, existing strategies lack the proper high-level view to deal with the
societal and long-term impacts of the transformation entailed by the
introduction of a new technological solution. This paper proposes to go beyond
the concept of system requirements and stakeholders' goals, and raise the
degree of abstraction by focusing on the notions of drivers, barriers and
impacts that a system can have on the environment in which it is deployed.
Furthermore, we suggest narrowing the perspective to a single domain, as the
effect of a technology is context-dependent. To put this vision into practice,
we interview 30 cross-disciplinary experts in the representative domain of
rural areas, and we analyse the transcripts to identify common themes. As a
result, we provide drivers, barriers and positive or negative impacts
associated with the introduction of novel technical solutions in rural areas.
This RE-relevant information could hardly be identified if interested
stakeholders were interviewed before the development of a single specific
system. This paper contributes to the literature with a fresh perspective on
sustainability requirements, and with a domain-specific framework grounded on
experts' opinions. The conceptual framework resulting from our analysis can be
used as a reference baseline for requirements elicitation endeavours in rural
areas that need to account for sustainability concerns.
|
In the Standard Model of particle physics, charged leptons of different
flavour couple to the electroweak force carriers with the same interaction
strength. This property, known as lepton flavour universality, was found to be
consistent with experimental evidence in a wide range of particle decays.
Lepton flavour universality can be tested by comparing branching fractions in
ratios such as $R_K = \mathcal{B}(B^+ \rightarrow K^+ \mu^+
\mu^-)/\mathcal{B}(B^+ \rightarrow K^+ e^+ e^-)$. This observable is measured
using proton-proton collision data recorded with the LHCb detector at CERN's
Large Hadron Collider corresponding to an integrated luminosity of $9
\rm{~fb}^{-1}$. For a dilepton invariant mass range of $q^{2} \in
[1.1,6.0]~\rm{Ge}\kern -0.1em \rm{V}^{2}$, the measured value of
$R_{K}=0.846\,^{+\,0.042}_{-\,0.039}\,^{+\,0.013}_{-\,0.012}$, where the first
uncertainty is statistical and the second systematic, is in tension with the
Standard Model prediction at the $3.1\sigma$ level, providing evidence for
lepton flavour universality violation in $B^+ \rightarrow K^+ \ell^+ \ell^-$
decays.
|
In a modern distributed storage system, storage nodes are organized in racks,
and the cross-rack communication dominates the system bandwidth. In this paper,
we focus on the rack-aware storage system. The initial setting was to
immediately repair every single node failure. However, multiple node failures
are frequent, and some systems may even wait for multiple node failures to
occur before repairing them in order to keep costs down. To still be able to
repair properly when multiple failures occur, we relax the repair model of the
rack-aware storage system. In the repair process, both the cross-rack
connections (i.e., the number of helper racks connected for repair, called the
repair degree) and the intra-rack connections (i.e., the number of helper
nodes in the rack containing the failed node) are reduced. We focus on
minimizing the cross-rack bandwidth in the rack-aware storage system with
multiple erasure tolerances. First, the fundamental tradeoff between the
repair bandwidth and the storage size for functional repair is established,
and the two extreme points corresponding to the minimum storage and minimum
cross-rack repair bandwidth are obtained. Second, explicit constructions
corresponding to the two points are given. Both have the minimum
sub-packetization level (i.e., the number of symbols stored in each node) and
a small repair degree. Besides, the size of the underlying finite field is
approximately the block length
of the code. Finally, for the convenience of practical use, we also establish a
transformation to convert our codes into systematic codes.
|
The evolution of electronic media is a mixed blessing. Due to the easy
access, low cost, and faster reach of the information, people search out and
devour news from online social networks. In contrast, the increasing acceptance
of social media reporting leads to the spread of fake news. This is a menacing
problem that causes disputes and endangers societal stability and harmony.
Fake news spread has gained attention from researchers due to its vicious
nature. The proliferation of misinformation in all media, from the internet to
cable news, paid advertising and local news outlets, has made it essential for
people to identify misinformation and sort through the facts. Researchers are
trying
to analyze the credibility of information and curtail false information on such
platforms. Credibility is the believability of the piece of information at
hand. Analyzing the credibility of fake news is challenging due to the intent
of its creation and the polychromatic nature of the news. In this work, we
propose a model for detecting fake news. Our method investigates the content
of the news at an early stage, i.e., when the news has been published but is
yet to be disseminated through social media. Our work interprets the content
with automatic feature extraction and the relevance of the text pieces. In
summary, we introduce stance as one of the features, along with the content of
the article, and employ the pre-trained contextualized word embeddings of BERT
to obtain state-of-the-art results for fake news detection. The experiment
conducted on the real-world dataset indicates that our model outperforms the
previous work and enables fake news detection with an accuracy of 95.32%.
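A sketch of the embedding side only (the stance feature and the downstream classifier are omitted; the checkpoint name is an assumption, and the Hugging Face transformers API is used for illustration):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def article_embedding(text):
    # Contextualized representation of the article: the [CLS] vector.
    inputs = tok(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        out = bert(**inputs)
    return out.last_hidden_state[:, 0]  # shape (1, 768)

# A stance feature would be concatenated to this embedding before the
# final classification layer.
```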
|
Machine-learning force fields (MLFF) should be accurate, computationally and
data efficient, and applicable to molecules, materials, and interfaces thereof.
Currently, MLFFs often introduce tradeoffs that restrict their practical
applicability to small subsets of chemical space or require exhaustive datasets
for training. Here, we introduce the Bravais-Inspired Gradient-Domain Machine
Learning (BIGDML) approach and demonstrate its ability to construct reliable
force fields using a training set with just 10-200 geometries for materials
including pristine and defect-containing 2D and 3D semiconductors and metals,
as well as chemisorbed and physisorbed atomic and molecular adsorbates on
surfaces. The BIGDML model employs the full relevant symmetry group for a given
material, does not assume artificial atom types or localization of atomic
interactions and exhibits high data efficiency and state-of-the-art energy
accuracies (errors substantially below 1 meV per atom) for an extended set of
materials. Extensive path-integral molecular dynamics carried out with BIGDML
models demonstrate the counterintuitive localization of benzene--graphene
dynamics induced by nuclear quantum effects and allow us to rationalize the
Arrhenius behavior of the hydrogen diffusion coefficient in a Pd crystal over a
wide range of temperatures.
|
We study the effects of elastic anisotropy on the Landau-de Gennes critical
points for nematic liquid crystals, in a square domain. The elastic anisotropy
is captured by a parameter, $L_2$, and the critical points are described by
three degrees of freedom. We analytically construct a symmetric critical point
for all admissible values of $L_2$, which is necessarily globally stable for
small domains, i.e., when the square edge length, $\lambda$, is small enough.
We perform asymptotic analyses and numerical studies to discover at least five
classes of these symmetric critical points - the $WORS$, $Ring^{\pm}$,
$Constant$ and $pWORS$ solutions, of which the $WORS$, $Ring^+$ and $Constant$
solutions can be stable. Furthermore, we demonstrate that the novel $Constant$
solution is energetically preferable for large $\lambda$ and large $L_2$, and
prove associated stability results that corroborate the stabilising effects of
$L_2$ for reduced Landau-de Gennes critical points. We complement our analysis
with numerically computed bifurcation diagrams for different values of $L_2$,
which illustrate the interplay of elastic anisotropy and geometry for nematic
solution landscapes, at low temperatures.
|
Conversational agents (CAs) represent an emerging research field in health
information systems, with great potential for empowering patients with timely
information and natural language interfaces. Nevertheless, there have been
limited attempts at establishing prescriptive knowledge on designing CAs in
the healthcare domain in general, and in diabetes care specifically. In this
paper, we conducted a Design Science Research project and proposed three
design principles for designing health-related CAs that leverage artificial
intelligence (AI) to address the limitations of existing solutions. Further, we
instantiated the proposed design and developed AMANDA - an AI-based
multilingual CA in diabetes care with state-of-the-art technologies for
natural-sounding localised accent. We employed mean opinion scores and system
usability scale to evaluate AMANDA's speech quality and usability,
respectively. This paper provides practitioners with a blueprint for designing
CAs in diabetes care with concrete design guidelines that can be extended into
other healthcare domains.
|
In this work, we propose three pilot assignment schemes to reduce the effect
of pilot contamination in cell-free massive multiple-input-multiple-output
(MIMO) systems. Our first algorithm, which is based on the idea of the random
sequential adsorption (RSA) process from the statistical physics literature,
can be implemented in a distributed and scalable manner while ensuring a
minimum distance among the co-pilot users. Further, leveraging the rich
literature of the RSA process, we present an approximate analytical approach to
accurately determine the density of the co-pilot users as well as the pilot
assignment probability for the typical user in this network. We also develop
two optimization-based centralized pilot allocation schemes with the primary
goal of benchmarking the RSA-based scheme. The first centralized scheme is
based only on the user locations (just like the RSA-based scheme) and
partitions the users into sets of co-pilot users such that the minimum distance
between two users in a partition is maximized. The second centralized scheme
takes both user and remote radio head (RRH) locations into account and provides
a near-optimal solution in terms of sum-user spectral efficiency (SE). The
general idea is to first cluster the users with similar propagation conditions
with respect to the RRHs using spectral graph theory and then ensure that the
users in each cluster are assigned different pilots using the branch and price
(BnP) algorithm. Our simulation results demonstrate that despite admitting
distributed implementation, the RSA-based scheme has a competitive performance
with respect to the first centralized scheme in all regimes as well as to the
near-optimal second scheme when the density of RRHs is high.
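The distributed scheme is simple enough to sketch (a schematic variant: the arrival order, retry policy, and handling of unassignable users are simplifications of the paper's algorithm):

```python
import numpy as np

def rsa_pilot_assignment(users, n_pilots, d_min, seed=1):
    # Users arrive in random order (the "adsorption" sequence) and take the
    # first pilot whose current holders are all at least d_min away.
    rng = np.random.default_rng(seed)
    holders = [[] for _ in range(n_pilots)]
    assignment = {}
    for u in rng.permutation(len(users)):
        for p in range(n_pilots):
            if all(np.linalg.norm(users[u] - users[v]) >= d_min
                   for v in holders[p]):
                holders[p].append(u)
                assignment[u] = p
                break  # unassigned users simply fall through, as in RSA
    return assignment

users = np.random.default_rng(0).uniform(0, 1000, size=(200, 2))  # metres
assignment = rsa_pilot_assignment(users, n_pilots=16, d_min=150)
```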
|
For those deformations that satisfy a certain non-degeneracy condition, we
describe the structure of certain simple modules of the deformations of the
subcharacter algebra of a finite group. For finite abelian groups, we prove
that the deformation given by the inclusion of the natural numbers, which
corresponds to the algebra generated by the fibred bisets over a field of
characteristic zero, is not semisimple. In the cyclic group of prime order
case, we provide a complete description of the semisimple deformations.
|
We present a quantitative, near-term experimental blueprint for the quantum
simulation of topological insulators using lattice-trapped ultracold polar
molecules. In particular, we focus on the so-called Hopf insulator, which
represents a three-dimensional topological state of matter existing outside the
conventional tenfold way and crystalline-symmetry-based classifications of
topological insulators. Its topology is protected by a \emph{linking number}
invariant, which necessitates long-range spin-orbit coupled hoppings for its
realization. While these ingredients have so far precluded its realization in
solid state systems and other quantum simulation architectures, in a companion
manuscript [1901.08597] we predict that Hopf insulators can in fact arise
naturally in dipolar interacting systems. Here, we investigate a specific such
architecture in lattices of polar molecules, where the effective `spin' is
formed from sublattice degrees of freedom. We introduce two techniques that
allow one to optimize dipolar Hopf insulators with large band gaps, and which
should also be readily applicable to the simulation of other exotic
bandstructures. First, we describe the use of Floquet engineering to control
the range and functional form of dipolar hoppings and second, we demonstrate
that molecular AC polarizabilities (under circularly polarized light) can be
used to precisely tune the resonance condition between different rotational
states. To verify that this latter technique is amenable to current generation
experiments, we calculate from first principles the AC polarizability for
$\sigma^+$ light for ${}^{40}$K$^{87}$Rb. Finally, we show that experiments are
capable of detecting the unconventional topology of the Hopf insulator by
varying the termination of the lattice at its edges, which gives rise to three
distinct classes of edge mode spectra.
|
This paper investigates the relationship between the spread of the COVID-19
pandemic, the state of community activity, and the financial index performance
across 20 countries. First, we analyze which countries behaved similarly in
2020 with respect to one of three multivariate time series: daily COVID-19
cases, Apple mobility data and national equity index price. Next, we study the
trajectories of all three of these attributes in conjunction to determine which
exhibited greater similarity. Finally, we investigate whether country financial
indices or mobility data responded quicker to surges in COVID-19 cases. Our
results indicate that mobility data and national financial indices exhibited
the most similarity in their trajectories, with financial indices responding
quicker. This suggests that financial market participants may have interpreted
and responded to COVID-19 data more efficiently than governments. Further,
results imply that efforts to study community mobility data as a leading
indicator for financial market performance during the pandemic were misguided.
|
We introduce two methods for improving the performance of agents meeting for
the first time to accomplish a communicative task. The methods are: (1)
`message mutation' during the generation of the communication protocol; and (2)
random permutations of the communication channel. These proposals are tested
using a simple two-player game involving a `teacher' who generates a
communication protocol and sends a message, and a `student' who interprets the
message. After training multiple agents via self-play, we analyse the
performance of these agents when they are matched with a stranger, i.e., their
zero-shot communication performance. We find that both message mutation and
channel permutation positively influence performance, and we discuss their
effects.
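Both proposals admit one-line implementations on discrete token arrays (a schematic with assumed conventions: integer token IDs and a shared vocabulary size):

```python
import numpy as np

def mutate_message(tokens, vocab_size, p_mut, rng):
    # (1) message mutation: each symbol is independently replaced by a
    # uniformly random symbol with probability p_mut.
    tokens = tokens.copy()
    flip = rng.random(tokens.shape) < p_mut
    tokens[flip] = rng.integers(0, vocab_size, flip.sum())
    return tokens

def permute_channel(tokens, vocab_size, rng):
    # (2) channel permutation: a fixed random relabelling of the vocabulary
    # applied to every message before the listener receives it.
    perm = rng.permutation(vocab_size)
    return perm[tokens]

rng = np.random.default_rng(0)
msg = np.array([3, 1, 4, 1, 5])
print(mutate_message(msg, vocab_size=10, p_mut=0.1, rng=rng))
print(permute_channel(msg, vocab_size=10, rng=rng))
```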
|
This position paper draws from the complexity of dark patterns to develop
arguments for differentiated interventions. We propose a matrix of
interventions with a \textit{measure axis} (from user-directed to
environment-directed) and a \textit{scope axis} (from general to specific). We
furthermore discuss a set of interventions situated in different fields of the
intervention spaces. The discussions at the 2021 CHI workshop "What can CHI do
about dark patterns?" should help hone the matrix structure and fill its fields
with specific intervention proposals.
|
Cyclic codes are among the most important families of codes in coding theory
for both theoretical and practical reasons. Despite their prominence and
intensive research on cyclic codes for over a half century, there are still
open problems related to cyclic codes. In this work, we use recent results on
the equivalence of cyclic codes to create a more efficient algorithm to
partition cyclic codes by equivalence based on cyclotomic cosets. This
algorithm is then implemented to carry out computer searches for both cyclic
codes and quasi-cyclic (QC) codes with good parameters. We also generalize
these results to repeated-root cases. We have found several new linear codes
that are cyclic or QC as an application of the new approach, as well as more
desirable constructions for linear codes with best known parameters. With the
additional new codes obtained through standard constructions, we have found a
total of 14 new linear codes.
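The cyclotomic cosets underpinning the partition are straightforward to compute (standard definition; shown for the binary case modulo 15):

```python
def cyclotomic_cosets(q, n):
    # q-cyclotomic cosets modulo n (requires gcd(q, n) = 1):
    # C_s = {s, s*q, s*q^2, ...} reduced mod n.
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q) % n
        seen.update(coset)
        cosets.append(coset)
    return cosets

print(cyclotomic_cosets(2, 15))
# [[0], [1, 2, 4, 8], [3, 6, 12, 9], [5, 10], [7, 14, 13, 11]]
```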
|
Nonlocal quantum correlations among the quantum subsystems play essential
roles in quantum science. The violation of the Svetlichny inequality provides
sufficient conditions of genuine tripartite nonlocality. We provide tight upper
bounds on the maximal quantum value of the Svetlichny operators under local
filtering operations, and present a qualitative analytical analysis on the
hidden genuine nonlocality for three-qubit systems. We investigate in detail
two classes of three-qubit states whose hidden genuine nonlocalities can be
revealed by local filtering.
|
We present a novel method named truncated hierarchical unstructured splines
(THU-splines) that supports both local $h$-refinement and unstructured
quadrilateral meshes. In a THU-spline construction, an unstructured
quadrilateral mesh is taken as the input control mesh, where the
degenerated-patch method [18] is adopted in irregular regions to define
$C^1$-continuous bicubic splines, whereas regular regions only involve $C^2$
B-splines. Irregular regions are then smoothly joined with regular regions
through the truncation mechanism [29], leading to a globally smooth spline
construction. Subsequently, local refinement is performed following the
truncated hierarchical B-spline construction [10] to achieve a flexible
refinement without propagating to unanticipated regions. Challenges lie in
refining transition regions, where mixed types of splines play a role.
THU-spline basis functions are globally $C^1$-continuous and are non-negative
everywhere except near extraordinary vertices, where slight negativity is
inevitable to retain refinability of the spline functions defined using the
degenerated-patch method. Such functions also have a finite representation that
can be easily integrated with existing finite element or isogeometric codes
through B\'{e}zier extraction.
|
A new technical method is developed for soft x-ray spectroscopy of near-edge
x-ray absorption fine structure (NEXAFS). The measurement is performed with
continuously rotating linearly polarized light over 2$\pi$, generated by a
segmented undulator. A demonstration of the rotational NEXAFS experiment was
successfully performed on a 2D film, showing detailed polarization dependence
of the molecular-orbital intensities. The present approach provides a variety
of technical opportunities that are compatible with state-of-the-art
experiments in nano-space and under \textit{operando} conditions.
|
This article concerns predictive modeling for spatio-temporal data as
well as model interpretation using data information in space and time. We
develop a novel approach based on supervised dimension reduction for such data
in order to capture nonlinear mean structures without requiring a prespecified
parametric model. In addition to prediction as a common interest, this approach
emphasizes the exploration of geometric information from the data. The method
of Pairwise Directions Estimation (PDE; Lue, 2019) is implemented in our
approach as a data-driven function searching for spatial patterns and temporal
trends. The benefit of using geometric information from the method of PDE is
highlighted, which aids effectively in exploring data structures. We further
enhance PDE, referring to it as PDE+, by incorporating kriging to estimate the
random effects not explained by the mean functions. Our proposal not only
increases prediction accuracy, but also improves the interpretability of the
model.
Two simulation examples are conducted and comparisons are made with four
existing methods. The results demonstrate that the proposed PDE+ method is very
useful for exploring and interpreting the patterns and trends for
spatio-temporal data. Illustrative applications to two real datasets are also
presented.
|
This paper describes a recently initiated research project aiming at
supporting development of computerised dialogue systems that handle breaches of
conversational norms such as the Gricean maxims, which describe how dialogue
participants ideally form their utterances in order to be informative,
relevant, brief, etc. Our approach is to model dialogue and norms with
co-operating distributed grammar systems (CDGSs), and to develop methods to
detect breaches and to handle them in dialogue systems for verbal human-robot
interaction.
|
As more people flock to social media to connect with others and form virtual
communities, it is important to study how members of these groups interact in
order to understand human behavior on the Web. In response to an increase in
hate
speech, harassment and other antisocial behaviors, many social media companies
have implemented different content and user moderation policies. On Reddit, for
example, communities, i.e., subreddits, are occasionally banned for violating
these policies. We study the effect of these regulatory actions as well as when
a community experiences a significant external event like a political election
or a market crash. Overall, we find that most subreddit bans prompt a small,
but statistically significant, number of active users to leave the platform;
the effect of external events varies with the type of event. We conclude with a
discussion on the effectiveness of the bans and their wider implications for
online content moderation.
|
We isolate a new preservation class of Suslin forcings and prove several
associated consistency results in the choiceless theory ZF+DC regarding
countable chromatic numbers of various Borel hypergraphs.
|
Extremely compact objects trap gravitational waves or neutrinos, assumed to
move along null geodesics in the trapping regions. The trapping of neutrinos
was extensively studied for spherically symmetric extremely compact objects
constructed under the simplest approximation of the uniform energy density
distribution, with radius located under the photosphere of the external
spacetime; in addition, uniform emissivity distribution of neutrinos was
assumed in these studies. Here we extend the studies of the neutrino trapping
for the case of the extremely compact Tolman VII objects representing the
simplest generalization of the internal Schwarzschild solution with uniform
distribution of the energy density, and the correspondingly related
distribution of the neutrino emissivity, which is thus again proportional to
the energy density; the radius of such extremely compact objects can exceed the
photosphere of the external Schwarzschild spacetime. Depending on the
parameters of the Tolman VII spacetimes, we determine the "local" and "global"
trapping efficiency coefficients and demonstrate that the role of the
trapping is significantly stronger than in the internal Schwarzschild
spacetimes. Our results indicate possible influence of the neutrino trapping in
cooling of neutron stars.
|
Quantum computing provides a new way for approaching problem solving,
enabling efficient solutions for problems that are hard on classical computers.
It is based on leveraging how quantum particles behave. With researchers around
the world showing quantum supremacy and the availability of cloud-based quantum
computers with free accounts for researchers, quantum computing is becoming a
reality. In this paper, we explore both the opportunities and challenges that
quantum computing has for location determination research. Specifically, we
illustrate the expected gain of using quantum algorithms by providing an
efficient quantum implementation of the well-known RF fingerprinting algorithm
and running it on an instance of the IBM Quantum Experience computer. The
proposed quantum algorithm has a complexity that is
exponentially better than its classical algorithm version, both in space and
running time. We further discuss both software and hardware research challenges
and opportunities that researchers can build on to explore this exciting new
domain.
|