text | summary |
---|---|
As the web grows in size, information spreads at an ever larger scale. Search engines are the medium through which this information is accessed. The crawler is the module of a search engine responsible for downloading web pages. To fetch fresh information and keep the database rich, a crawler should crawl the web in some order; this is called ordering of URLs. URL ordering should be done in an efficient and effective manner so that the web is crawled proficiently. In this paper, a survey of some existing methods of URL ordering is presented, and at the end of the paper a comparison among them is carried out. | URL ordering policies for distributed crawlers: a review |
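As a minimal illustration of URL ordering, the sketch below implements a score-driven crawl frontier: a priority queue that always yields the highest-scoring URL next. The scoring signal (e.g., an estimated PageRank or freshness value) and all names are assumptions for illustration; no specific surveyed policy is implied.

```python
import heapq

class CrawlFrontier:
    """Minimal priority-based URL frontier: URLs with higher scores
    (e.g., estimated PageRank or freshness) are crawled first."""

    def __init__(self):
        self._heap = []      # min-heap of (-score, url)
        self._seen = set()   # avoid re-enqueueing known URLs

    def push(self, url, score):
        if url not in self._seen:
            self._seen.add(url)
            heapq.heappush(self._heap, (-score, url))  # negate for max-first

    def pop(self):
        return heapq.heappop(self._heap)[1]

frontier = CrawlFrontier()
frontier.push("http://example.com/a", score=0.9)
frontier.push("http://example.com/b", score=0.2)
print(frontier.pop())  # -> http://example.com/a (highest score first)
```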
The speech representations learned from large-scale unlabeled data have shown better generalizability than those from supervised learning, and thus attract much interest for application to various downstream tasks. In this paper, we explore the limits of speech representations learned by different self-supervised objectives and datasets for automatic speaker verification (ASV), especially with a well-recognized SOTA ASV model, ECAPA-TDNN [1], as the downstream model. The representations from all hidden layers of the pre-trained model are first averaged with learnable weights and then fed into the ECAPA-TDNN as input features. The experimental results on the VoxCeleb dataset show that the weighted average representation is significantly superior to FBank, a conventional handcrafted feature for ASV. Our best single system achieves 0.537%, 0.569%, and 1.180% equal error rate (EER) on the three official trials of VoxCeleb1, respectively. Accordingly, the ensemble system with three pre-trained models further improves the EER to 0.479%, 0.536% and 1.023%. Among the three evaluation trials, our best system outperforms the winning system [2] of the VoxCeleb Speaker Recognition Challenge 2021 (VoxSRC2021) on the VoxCeleb1-E trial. | Large-scale Self-Supervised Speech Representation Learning for Automatic Speaker Verification |
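The layer-averaging step described in this abstract can be sketched as follows; this is a toy reading of "averaged with learnable weights", not the authors' code, and the softmax normalization and layer count are assumptions.

```python
import torch
import torch.nn as nn

class WeightedLayerSum(nn.Module):
    """Combine all hidden layers of a pre-trained model with learnable
    weights (here normalized by a softmax), yielding downstream features."""

    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_layers, batch, time, dim)
        w = torch.softmax(self.weights, dim=0)  # convex combination
        return (w.view(-1, 1, 1, 1) * hidden_states).sum(dim=0)

# Toy usage: 13 layers (e.g., a CNN front-end plus 12 transformer blocks).
feats = WeightedLayerSum(13)(torch.randn(13, 4, 100, 768))
print(feats.shape)  # torch.Size([4, 100, 768])
```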
The Planck Collaboration has recently released maps of the microwave sky in both temperature and polarization. Diffuse astrophysical components (including Galactic emissions, cosmic far infrared (IR) background, y-maps of the thermal Sunyaev-Zeldovich (SZ) effect) and catalogs of many thousands of Galactic and extragalactic radio and far-IR sources, and galaxy clusters detected through the SZ effect are the main astrophysical products of the mission. A concise overview of these results and of astrophysical studies based on Planck data is presented. | Astrophysical components from Planck maps |
Regression discontinuity design (RDD) is a quasi-experimental approach to study the causal effects of an intervention/treatment on later health outcomes. It exploits a continuously measured assignment variable with a clearly defined cut-off above or below which the population is at least partially assigned to the intervention/treatment. We describe the RDD and outline the applications of RDD in the context of perinatal epidemiology and birth cohort research. There is an increasing number of studies using RDD in perinatal and pediatric epidemiology. Most of these studies were conducted in the context of education, social and welfare policies, healthcare organization, insurance, and preventive programs. Additional thematic fields include clinically relevant research questions, shock events, social and environmental factors, and changes in guidelines. Maternal and perinatal characteristics, such as age, birth weight and gestational age are frequently used assignment variables to study the effects of the type and intensity of neonatal care, health insurance, and supplemental newborn benefits. Different socioeconomic measures have been used to study the effects of social, welfare and cash transfer programs, while age or date of birth served as assignment variables to study the effects of vaccination programs, pregnancy-specific guidelines, maternity and paternity leave policies and introduction of newborn-based welfare programs. RDD has advantages, including relatively weak and testable assumptions, strong internal validity, intuitive interpretation, and transparent and simple graphical representation. However, its use in birth cohort research is hampered by the rarity of settings outside of policy and program evaluations, low statistical power, limited external validity (geographic- and time-specific settings) and potential contamination by other exposures/interventions. | Regression discontinuity design in perinatal epidemiology and birth cohort research |
The structure of complex networks can be characterized by counting and analyzing network motifs. Motifs are small subgraphs that occur repeatedly in a network, such as triangles or chains. Recent work has generalized motifs to temporal and dynamic network data. However, existing techniques do not generalize to sequential or trajectory data, which represents entities moving through the nodes of a network, such as passengers moving through transportation networks. The unit of observation in these data is fundamentally different, since we analyze full observations of trajectories (e.g., a trip from airport A to airport C through airport B), rather than independent observations of edges or snapshots of graphs over time. In this work, we define sequential motifs in trajectory data, which are small, directed, and edge-weighted subgraphs corresponding to patterns in observed sequences. We draw a connection between counting and analysis of sequential motifs and Higher-Order Network (HON) models. We show that by mapping edges of a HON, specifically a $k$th-order DeBruijn graph, to sequential motifs, we can count and evaluate their importance in observed data. We test our methodology with two datasets: (1) passengers navigating an airport network and (2) people navigating the Wikipedia article network. We find that the most prevalent and important sequential motifs correspond to intuitive patterns of traversal in the real systems, and show empirically that the heterogeneity of edge weights in an observed higher-order DeBruijn graph has implications for the distributions of sequential motifs we expect to see across our null models. | Sequential Motifs in Observed Walks |
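A minimal sketch of the higher-order construction underlying sequential motifs: edges of a $k$th-order De Bruijn graph connect overlapping length-$k$ histories, weighted by how often they occur in observed trajectories. The function and its toy inputs are illustrative, not the authors' implementation.

```python
from collections import Counter

def debruijn_edges(trajectories, k):
    """Count edges of a k-th order De Bruijn graph: nodes are length-k
    histories, and an edge links consecutive overlapping histories,
    weighted by observed frequency in the trajectories."""
    edges = Counter()
    for traj in trajectories:
        for i in range(len(traj) - k):
            history = tuple(traj[i:i + k])
            nxt = tuple(traj[i + 1:i + k + 1])
            edges[(history, nxt)] += 1
    return edges

trips = [["A", "B", "C"], ["A", "B", "D"], ["A", "B", "C"]]
for edge, weight in debruijn_edges(trips, k=2).items():
    print(edge, weight)  # (('A','B'),('B','C')) 2 and (('A','B'),('B','D')) 1
```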
The hypothesis that pulsar wind nebulae (PWNe) can significantly contribute to the excess of the positron ($e^+$) cosmic-ray flux has been consolidated after the observation of a $\gamma$-ray emission at TeV energies of a few degree size around the Geminga and Monogem PWNe, and at GeV energies for Geminga at a much larger extension. The $\gamma$-ray halos around these PWNe are interpreted as due to electrons ($e^-$) and $e^+$ accelerated in and escaped from their PWNe, inverse-Compton scattering low-energy photons of the interstellar radiation fields. The extension of these halos suggests that the diffusion around these PWNe is suppressed by two orders of magnitude with respect to the average in the Galaxy. We implement a two-zone diffusion model for the propagation of $e^+$ accelerated by the Galactic population of PWNe. We consider pulsars from the ATNF catalog and build up simulations of the PWN Galactic population. In both scenarios (catalog and simulations), we find that within a two-zone diffusion model the total contribution from PWNe and secondary $e^+$ is at the level of AMS-02 data, for an efficiency of conversion of the pulsar spin-down energy into $e^\pm$ of $\eta\sim0.1$. For the simulated PWNe, a $1\sigma$ uncertainty band is determined, which spans at least one order of magnitude from 10 GeV up to a few TeV. A hint of a decreasing $e^+$ flux at TeV energies is found, although it is strongly connected to the chosen value of the radius of the low-diffusion bubble around each source. | Contribution of pulsars to cosmic-ray positrons in light of recent observation of inverse-Compton halos |
We study $q$-pushTASEP, a discrete time interacting particle system whose distribution is related to the $q$-Whittaker measure. We prove a uniform in $N$ lower tail bound on the fluctuation scale for the location $x_N(N)$ of the right-most particle at time $N$ when started from step initial condition. Our argument relies on a map from the $q$-Whittaker measure to a model of periodic last passage percolation (LPP) with geometric weights in an infinite strip that was recently established in [arXiv:2106.11922]. By a path routing argument we bound the passage time in the periodic environment in terms of an infinite sum of independent passage times for standard LPP on $N\times N$ squares with geometric weights whose parameters decay geometrically. To prove our tail bound result we combine this reduction with a concentration inequality, and a crucial new technical result -- lower tail bounds on $N\times N$ last passage times uniformly over all $N \in \mathbb N$ and all the geometric parameters in $(0,1)$. This technical result uses Widom's trick [arXiv:math/0108008] and an adaptation of an idea of Ledoux introduced for the GUE [Led05a] to reduce the uniform lower tail bound to uniform asymptotics for very high moments, up to order $N$, of the Meixner ensemble. This we accomplish by first obtaining sharp uniform estimates for factorial moments of the Meixner ensemble from an explicit combinatorial formula of Ledoux [Led05b], and translating them to polynomial bounds via a further careful analysis and delicate cancellation. | The lower tail of $q$-pushTASEP |
We extend the known result that symbols from the modulation space $M^{\infty,1}(\mathbb{R}^{2n})$, also known as Sj\"{o}strand's class, produce bounded operators in $L^2(\mathbb{R}^n)$, to general $L^p$ boundedness at the cost of a loss of derivatives. Indeed, we show that pseudo-differential operators acting from the $L^p$-Sobolev spaces $L^p_s(\mathbb{R}^n)$ to $L^p(\mathbb{R}^n)$ with symbols from the modulation space $M^{\infty,1}(\mathbb{R}^{2n})$ are bounded whenever $s\geq n|1/p-1/2|.$ This estimate is sharp for all $1\leq p\leq\infty$. | On $L^p-$boundedness of pseudo-differential operators of Sj\"ostrand's class |
Cherenkov radiation in a uniformly moving, homogeneous, isotropic medium without dispersion is studied. A formula for the spectrum of the Cherenkov radiation of a fermion is derived for the case when the speed of the medium is less than the speed of light in the medium at rest. The properties of the Cherenkov spectrum are investigated. | Cherenkov radiation in moving medium |
Stars collect most of their mass during the protostellar stage, yet the accretion luminosity and stellar parameters, which are needed to compute the mass accretion rate, are poorly constrained for the youngest sources. The aim of this work is to fill this gap, computing the stellar properties and the accretion rates for a large sample of Class I protostars located in nearby (< 500 pc) star-forming regions and analysing their interplay. We used a self-consistent method to provide accretion and stellar parameters using SED modeling and veiling information from near-IR observations, when possible. We calculated accretion and stellar properties for the first time for 50 young stars. We focused our analysis on the 39 confirmed protostars, finding that their mass accretion rate varies between about 10^(-8) and about 10^(-4) Msun/yr in a stellar mass range between about 0.1 and 3 Msun. We find systematically larger mass accretion rates for our Class I sample than in Class II objects. Although the mass accretion rate we found is high, it still suggests that either stars collect most of their mass before the Class I stage, or eruptive accretion is needed during the overall protostellar phase. Indeed, our results suggest that for a large number of protostars the disk can be unstable, which can result in accretion bursts and disk fragmentation in the past or in the future. | The Mass Accretion Rate and Stellar Properties in Class I Protostars |
An augmented Lagrangian (AL) can convert a constrained optimization problem into a sequence of simpler (e.g., unconstrained) problems, which are then usually solved with local solvers. Recently, surrogate-based Bayesian optimization (BO) sub-solvers have been successfully deployed in the AL framework for a more global search in the presence of inequality constraints; however, a drawback was that expected improvement (EI) evaluations relied on Monte Carlo. Here we introduce an alternative slack variable AL, and show that in this formulation the EI may be evaluated with library routines. The slack variables furthermore facilitate equality as well as inequality constraints, and mixtures thereof. We show how our new slack "ALBO" compares favorably to the original. Its superiority over conventional alternatives is reinforced on several mixed constraint examples. | Bayesian optimization under mixed constraints with a slack-variable augmented Lagrangian |
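A minimal sketch of the slack-variable formulation: each inequality $c_i(x) \leq 0$ becomes an equality $c_i(x) + s_i = 0$ with slack $s_i \geq 0$, and the standard AL multiplier and penalty terms act on the equality residuals. The toy problem and parameter values are assumptions; the BO sub-solver and closed-form EI evaluation are not shown.

```python
import numpy as np

def augmented_lagrangian(f, cons, x, s, lam, rho):
    """Slack-variable AL: each inequality c_i(x) <= 0 is rewritten as the
    equality c_i(x) + s_i = 0 with slack s_i >= 0, then the usual
    multiplier and penalty terms act on the equality residuals h."""
    h = np.array([c(x) for c in cons]) + s
    return f(x) + lam @ h + 0.5 * rho * (h @ h)

# Toy problem: minimize x^2 subject to 1 - x <= 0 (i.e., x >= 1).
f = lambda x: float(x[0] ** 2)
cons = [lambda x: 1.0 - x[0]]
x, s = np.array([1.2]), np.array([0.2])  # residual c(x) + s = 0 here
print(augmented_lagrangian(f, cons, x, s, lam=np.array([0.5]), rho=10.0))
```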
We complete previous investigations on the statistics of velocity fluctuations arising from a random distribution of point vortices in two-dimensional hydrodynamics. We show that, in a statistical sense, the velocity created by a point vortex is shielded by cooperative effects over a distance $\Lambda \sim n^{-1/2}$, the inter-vortex separation. For $R\gg \Lambda$, the ``effective'' velocity decays as $R^{-2}$ instead of the ordinary law $R^{-1}$ recovered for $R\ll \Lambda$. These results are similar to those obtained by Agekyan [Sov. Astron. 5 (1962) 809] in his investigations of the fluctuations of the gravitational field. They give further support to our previous observation that the statistics of velocity fluctuations are (marginally) dominated by the contribution of the nearest neighbor. | On the effective velocity created by a point vortex in two-dimensional hydrodynamics |
Using molecular dynamics simulations, we study a spherically-symmetric ``two-scale'' Jagla potential with both repulsive and attractive ramps. This potential displays a liquid-liquid phase transition with a positively sloped coexistence line ending at a critical point well above the equilibrium melting line. We study the dynamic behavior in the vicinity of this liquid-liquid critical point. We find that the dynamics in the more ordered high-density phase (HDL) are much slower than the dynamics in the less ordered low-density phase (LDL). Moreover, the behavior of the diffusion constant and relaxation time in the HDL phase follows approximately an Arrhenius law, while in the LDL phase the slope of the Arrhenius fit increases upon cooling. On the other hand, if we cool the system at constant pressure above the critical pressure, the behavior of the dynamics changes smoothly with temperature. It resembles the behavior of the LDL at high temperatures and the behavior of the HDL at low temperatures. This dynamic crossover happens in the vicinity of the Widom line (the extension of the coexistence line into the one-phase region), which also has a positive slope. Our work suggests a possible general relation between a liquid-liquid phase transition and the change in dynamics. | Relation between the Liquid-Liquid Phase Transition and Dynamic Behavior in the Jagla Model |
The possibility of a dark sector weakly coupling to Standard Model (SM) particles through new light mediators is explored at the Belle II experiment. We present here results from three different searches: for a long-lived (pseudo)scalar particle in rare $B$ decays; for a di-tau resonance in four-muon final states; and the update on the search for a $Z'$ boson decaying invisibly. We also look for lepton flavor violation by searching for $\tau\rightarrow\ell \alpha$ decays, with $\alpha$ a new invisible boson, and we report the first untagged reconstruction of $\tau$-pair events in a search for the neutrinoless decays $\tau \to \ell \phi$. Finally, we present the world's most precise measurement of the $\tau$ lepton mass. These studies are performed on samples of the data collected by the Belle II detector during the 2019-2021 data taking. | Dark sectors and $\tau$ physics at Belle II |
Material systems with Dirac electrons on a bipartite planar lattice and possessing superconducting and excitonic interactions are investigated both in the half-filling and doped regimes at zero temperature. Excitonic pairing is the analog of chiral symmetry breaking of relativistic fermion theories and produces an insulating gap in the electronic spectrum. Condensed matter systems with such competing interactions display phenomena that are analogous to the onset of the chiral condensate and of color superconductivity in dense quark matter. Evaluation of the free-energy (effective potential) allows us to map the phases of the system for different values of the couplings of each interaction. At half-filling, we show that Cooper pairs and excitons can coexist if the superconducting and excitonic interactions strengths are equal and above a quantum critical point, which is evaluated. If one of the interactions is stronger than the other, then only the corresponding order parameter is non-vanishing and we do not have coexistence. For nonzero values of chemical potential, the phase diagram for each interaction is obtained independently. Taking into account only the excitonic interaction, a critical chemical potential, as a function of the interaction strength, is obtained. If only the superconducting interaction is considered, the superconducting gap displays a characteristic dome as charge carriers are doped into the system and our results qualitatively reproduce the superconducting phase diagram of several compounds, like 122 pnictides and cuprate superconductors. We also analyze the possibility of coexistence between Cooper pairs and excitons and we show that, even if the excitonic interaction strength is greater than the superconducting interaction, as the chemical potential increases, superconductivity tends to suppress the excitonic order parameter. | Superconducting and excitonic quantum phase transitions in doped systems with Dirac electrons |
The beat time $\tau_{\rm fpt}$ associated with the energy transfer between two coupled oscillators is dictated by the bandwidth theorem, which sets a lower bound $\tau_{\rm fpt}\sim 1/\delta\omega$. We show, both experimentally and theoretically, that two coupled active LRC electrical oscillators with parity-time (PT) symmetry bypass the lower bound imposed by the bandwidth theorem, reducing the beat time to zero while retaining a real-valued spectrum and fixed eigenfrequency difference $\delta\omega$. Our results foster new design strategies which lead to (stable) pseudo-unitary wave evolution, and may allow for ultrafast computation, telecommunication, and signal processing. | Bypassing the bandwidth theorem with PT symmetry |
Capacity formulas and random-coding exponents are derived for a generalized family of Gel'fand-Pinsker coding problems. These exponents yield asymptotic upper bounds on the achievable log probability of error. In our model, information is to be reliably transmitted through a noisy channel with finite input and output alphabets and random state sequence, and the channel is selected by a hypothetical adversary. Partial information about the state sequence is available to the encoder, adversary, and decoder. The design of the transmitter is subject to a cost constraint. Two families of channels are considered: 1) compound discrete memoryless channels (CDMC), and 2) channels with arbitrary memory, subject to an additive cost constraint, or more generally to a hard constraint on the conditional type of the channel output given the input. Both problems are closely connected. The random-coding exponent is achieved using a stacked binning scheme and a maximum penalized mutual information decoder, which may be thought of as an empirical generalized Maximum a Posteriori decoder. For channels with arbitrary memory, the random-coding exponents are larger than their CDMC counterparts. Applications of this study include watermarking, data hiding, communication in presence of partially known interferers, and problems such as broadcast channels, all of which involve the fundamental idea of binning. | Capacity and Random-Coding Exponents for Channel Coding with Side Information |
We study quantum networks with tree structures, where information propagates from a root to leaves: at each node in the network, the received qubit unitarily interacts with fresh ancilla qubits, and then each qubit is sent through a noisy channel to a different node in the next level. As the tree's depth grows, there is a competition between the decay of quantum information due to the noisy channels and the additional protection against noise that is achieved by further delocalizing information. In the classical setting, where each node just copies the input bit into multiple output bits, this model has been studied as the broadcasting or reconstruction problem on trees, which has broad applications. In this work, we study the quantum version of this problem, where the encoder at each node is a Clifford unitary that encodes the input qubit in a stabilizer code. Such noisy quantum trees, for instance, provide a useful model for understanding the effect of noise within the encoders of concatenated codes. We prove that above certain noise thresholds, which depend on the properties of the code such as its distance, as well as the properties of the encoder, information decays exponentially with the depth of the tree. On the other hand, by studying certain efficient decoders, we prove that for codes with distance d>=2 and for sufficiently small (but non-zero) noise, classical information and entanglement propagate over a noisy tree with infinite depth. Indeed, we find that this remains true even for binary trees with certain 2-qubit encoders at each node, which encode the received qubit in the binary repetition code with distance d=1. | Propagation of Quantum Information in Tree Networks: Noise Thresholds for Infinite Propagation |
We employ a recently introduced structured input-output analysis (SIOA) approach to analyze streamwise and spanwise wavelengths of flow structures in stably stratified plane Couette flow. In the low-Reynolds number ($Re$) low-bulk Richardson number ($Ri_b$) spatially intermittent regime, we demonstrate that SIOA predicts high amplification associated with wavelengths corresponding to the characteristic oblique turbulent bands in this regime. SIOA also identifies quasi-horizontal flow structures resembling the turbulent-laminar layers commonly observed in the high-$Re$ high-$Ri_b$ intermittent regime. An SIOA across a range of $Ri_b$ and $Re$ values suggests that the classical Miles-Howard stability criterion ($Ri_b\leq 1/4$) is associated with a change in the most amplified flow structures when Prandtl number is close to one ($Pr\approx 1$). However, for $Pr\ll 1$, the most amplified flow structures are determined by the product $PrRi_b$. For $Pr\gg 1$, SIOA identifies another quasi-horizontal flow structure that we show is principally associated with density perturbations. We further demonstrate the dominance of this density-associated flow structure in the high $Pr$ limit by constructing analytical scaling arguments for the amplification in terms of $Re$ and $Pr$ under the assumptions of unstratified flow (with $Ri_b=0$) and streamwise invariance. | Structured input-output analysis of stably stratified plane Couette flow |
Gravitational lensing is a potentially powerful tool for elucidating the origin of gamma-ray emission from distant sources. Cosmic lenses magnify the emission from distant sources and produce time delays between mirage images. Gravitationally-induced time delays depend on the position of the emitting regions in the source plane. The Fermi/LAT satellite continuously monitors the entire sky and detects gamma-ray flares, including those from gravitationally-lensed blazars. Therefore, temporal resolution at gamma-ray energies can be used to measure these time delays, which, in turn, can be used to resolve the origin of the gamma-ray flares spatially. We provide a guide to the application and Monte Carlo simulation of three techniques for analyzing these unresolved light curves: the Autocorrelation Function, the Double Power Spectrum, and the Maximum Peak Method. We apply these methods to derive time delays from the gamma-ray light curve of the gravitationally-lensed blazar PKS 1830-211. The result of the temporal analysis, combined with the properties of the lens from radio observations, yields an improvement in spatial resolution at gamma-ray energies by a factor of 10000. We analyze four active periods. For two of these periods, the emission is consistent with origination from the core, and for the other two, the data suggest that the emission region is displaced from the core by more than ~1.5 kpc. For the core emission, the gamma-ray time delays, $23\pm0.5$ days and $19.7\pm1.2$ days, are consistent with the radio time delay $26^{+4}_{-5}$ days. | Resolving the High Energy Universe with Strong Gravitational Lensing: The Case of PKS 1830-211 |
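A toy sketch of the first technique, the Autocorrelation Function: a flare plus its delayed, demagnified echo produces a secondary ACF peak at the lag of the time delay. The light curve, flux ratio, and 23-day delay below are synthetic stand-ins, not PKS 1830-211 data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay, ratio = 1000, 23, 0.5  # days, true delay, image flux ratio
flare = np.exp(-0.5 * ((np.arange(n) - 200) / 5.0) ** 2)
lc = flare + ratio * np.roll(flare, delay) + 0.02 * rng.standard_normal(n)

x = lc - lc.mean()
acf = np.correlate(x, x, mode="full")[n - 1:]  # lags 0 .. n-1
acf /= acf[0]
lag = 15 + np.argmax(acf[15:300])              # skip the main peak (~flare width)
print("estimated delay:", lag, "days")         # ~23
```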
The Majorana code is an example of a stabilizer code where the quantum information is stored in a system supporting well-separated Majorana Bound States (MBSs). We focus on one-dimensional realizations of the Majorana code, as well as networks of such structures, and investigate their lifetime when coupled to a parity-preserving thermal environment. We apply the Davies prescription, a standard method that describes the basic aspects of a thermal environment, and derive a master equation in the Born-Markov limit. We first focus on a single wire with immobile MBSs and perform error correction to annihilate thermal excitations. In the high-temperature limit, we show both analytically and numerically that the lifetime of the Majorana qubit grows logarithmically with the size of the wire. We then study a trijunction with four MBSs when braiding is executed. We study the occurrence of dangerous error processes that prevent the lifetime of the Majorana code from growing with the size of the trijunction. The origin of the dangerous processes is the braiding itself, which separates pairs of excitations and renders the noise nonlocal; these processes arise from the basic constraints of moving MBSs in 1D structures. We confirm our predictions with Monte Carlo simulations in the low-temperature regime, i.e. the regime of practical relevance. Our results put a restriction on the degree of self-correction of this particular 1D topological quantum computing architecture. | Monte Carlo studies of the properties of the Majorana quantum error correction code: is self-correction possible during braiding? |
The extremely regular, periodic radio emission from millisecond pulsars makes them useful tools for studying neutron star astrophysics, general relativity, and low-frequency gravitational waves. These studies require that the observed pulse times of arrival be fit to complex timing models that describe numerous effects such as the astrometry of the source, the evolution of the pulsar's spin, the presence of a binary companion, and the propagation of the pulses through the interstellar medium. In this paper, we discuss the benefits of using Bayesian inference to obtain pulsar timing solutions. These benefits include the validation of linearized least-squares model fits when they are correct, and the proper characterization of parameter uncertainties when they are not; the incorporation of prior parameter information and of models of correlated noise; and the Bayesian comparison of alternative timing models. We describe our computational setup, which combines the timing models of Tempo2 with the nested-sampling integrator MultiNest. We compare the timing solutions generated using Bayesian inference and linearized least-squares for three pulsars: B1953+29, J2317+1439, and J1640+2224, which demonstrate a variety of the benefits that we posit. | Bayesian inference for pulsar timing models |
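A toy sketch of the Bayesian fit for a linear timing model (phase offset plus spin-frequency drift). It uses the emcee ensemble sampler as a stand-in for the paper's Tempo2+MultiNest setup; the model, data, and noise level are invented for illustration.

```python
import numpy as np
import emcee  # stand-in sampler; the paper pairs Tempo2 with MultiNest

rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 50)                            # years
res = 2e-6 + 3e-7 * t + 1e-7 * rng.standard_normal(50)   # toy residuals (s)

def log_prob(theta):
    a, b = theta  # offset and drift
    return -0.5 * np.sum(((res - a - b * t) / 1e-7) ** 2)

p0 = np.array([2e-6, 3e-7]) + 1e-9 * rng.standard_normal((16, 2))
sampler = emcee.EnsembleSampler(16, 2, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
print(sampler.get_chain(discard=500, flat=True).mean(axis=0))
```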
Two-dimensional (2D) superconducting systems are of great importance to exploring exotic quantum physics. Recent development of fabrication techniques stimulates the studies of high-quality single-crystalline 2D superconductors, where intrinsic properties give rise to unprecedented physical phenomena. Here we report the observation of Zeeman-type spin-orbit interaction protected superconductivity (Zeeman-protected superconductivity) in 4 monolayer (ML) to 6 ML crystalline Pb films grown on striped incommensurate (SIC) Pb layers on Si(111) substrates by molecular beam epitaxy (MBE). An anomalously large in-plane critical field, far beyond the Pauli limit, is detected, which can be attributed to Zeeman-protected superconductivity due to the in-plane inversion symmetry breaking at the interface. Our work demonstrates that in superconducting heterostructures the interface can induce Zeeman-type spin-orbit interaction (SOI) and modulate the superconductivity. | Interface induced Zeeman-protected superconductivity in ultrathin crystalline lead films |
The DGP-model with additional terms in the action is considered. These terms have a special form and include auxiliary scalar fields without kinetic terms, which are non-minimally coupled to gravity. The use of these fields allows one to exclude from the theory the mode which corresponds to the strong coupling effect. The effective four-dimensional theory on the brane appears to be the same as in the original DGP-model. | The strong coupling effect and auxiliary fields in the DGP-model |
In sponsored search advertising, keywords serve as an essential bridge linking advertisers, search users and search engines. Advertisers have to deal with a series of keyword decisions throughout the entire lifecycle of search advertising campaigns. This paper proposes a multi-level and closed-form computational framework for keyword optimization (MKOF) to support various keyword decisions. Based on this framework, we develop corresponding optimization strategies for keyword targeting, keyword assignment and keyword grouping at different levels (e.g., market, campaign and adgroup). With two real-world datasets obtained from past search advertising campaigns, we conduct computational experiments to evaluate our keyword optimization framework and instantiated strategies. Experimental results show that our method can approach the optimal solution in a steady way, and it outperforms two baseline keyword strategies commonly used in practice. The proposed MKOF framework also provides a valid experimental environment to implement and assess various keyword strategies in sponsored search advertising. | Keyword Optimization in Sponsored Search Advertising: A Multi-Level Computational Framework |
All computation is physically embedded. Reflecting this, a growing body of results embraces rate equations as the underlying mechanics of thermodynamic computation and biological information processing. Strictly applying the implied continuous-time Markov chains, however, excludes a universe of natural computing. We show that expanding the toolset to continuous-time hidden Markov chains substantially removes the constraints. The general point is made concrete by our analyzing two eminently-useful computations that are impossible to describe with a set of rate equations over the memory states. We design and analyze a thermodynamically-costless bit flip, providing a first counterexample to rate-equation modeling. We generalize this to a costless Fredkin gate---a key operation in reversible computing that is computation universal. Going beyond rate-equation dynamics is not only possible, but necessary if stochastic thermodynamics is to become part of the paradigm for physical information processing. | Non-Markovian Momentum Computing: Universal and Efficient |
We construct catalogues of standard sirens (StS) based on future gravitational wave (GW) detector networks, i.e., the second-generation ground-based advanced LIGO+advanced Virgo+KAGRA+LIGO-India (HLVKI), the third-generation ground-based Einstein Telescope+two Cosmic Explorers (ET+2CE), and the space-based LISA+Taiji. From the corresponding electromagnetic (EM) counterpart detectors for each network, we sample the joint GW+EM detections from the probability to construct the Hubble diagram of standard sirens for 10 years of detections with HLVKI, 5 years with ET+2CE, and 5 years with LISA+Taiji, which we estimate would be available and released in the 2030s. We thus construct a combined Hubble diagram from these ground- and space-based detector networks to explore the expansion history of our Universe from redshift 0 to 7, and give a conservative and realistic estimation of the catalogue and Hubble diagram of GW standard sirens and their potential for studying cosmology and modified gravity theory in the 2030s. We adopt two strategies for the forecasts. One is the traditional model-fitting Markov chain Monte Carlo (MCMC) method. The results show that the combined StS alone can constrain the Hubble constant at the precision level of $0.34\%$, 1.76 times tighter than the current most precise measurement from \textit{Planck}+BAO+Pantheon. The joint StS with current EM experiments will improve the constraints on cosmological parameters significantly. Modified gravity theory can be constrained with $0.46\%$ error from the GW propagation. In the second strategy, we use machine-learning nonparametric reconstruction techniques, i.e., the Gaussian process (GP), with Artificial Neural Networks (ANN) as a comparison. GP reconstructions give results comparable to MCMC. We anticipate further work and research on these topics. | Gravitational-Wave Detector Networks: Standard Sirens on Cosmology and Modified Gravity Theory |
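A minimal sketch of the second strategy, nonparametric GP reconstruction of a Hubble-diagram-like relation, here with scikit-learn rather than the authors' pipeline; the redshift range matches the abstract, but the distance relation and error model are arbitrary toy choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
z = np.sort(rng.uniform(0.01, 7.0, 60))[:, None]   # sirens out to z ~ 7
d = 4285.0 * z.ravel() * (1 + 0.3 * z.ravel())     # arbitrary smooth trend
y = d + 0.05 * d * rng.standard_normal(60)         # 5% "measurement" errors

gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(1.0),
                              normalize_y=True).fit(z, y)
mean, std = gp.predict(np.linspace(0.01, 7.0, 100)[:, None], return_std=True)
print(mean[:3], std[:3])  # reconstructed relation with uncertainty
```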
This survey on the topology of Stein manifolds is an extract from our recent joint book. It is compiled from two short lecture series given by the first author in 2012 at the Institute for Advanced Study, Princeton, and the Alfred Renyi Institute of Mathematics, Budapest. | Stein structures: existence and flexibility |
Efficient prediction of internet traffic is essential for proactive management of computer networks. Nowadays, machine learning approaches show promising performance in modeling real-world complex traffic. However, most existing works assume that model training and evaluation data come from the same distribution. In practice, there is a high probability that the model will deal with data from a slightly or entirely unknown distribution in the deployment phase. This paper investigates and evaluates machine learning performance using eXtreme Gradient Boosting, Light Gradient Boosting Machine, Stochastic Gradient Descent, Gradient Boosting Regressor, CatBoost Regressor, and their stacked ensemble model, using data from both identical and out-of-distribution settings. We also propose a hybrid machine learning model integrating wavelet decomposition to improve out-of-distribution prediction, as the standalone models were unable to generalize well. Our experimental results show the best performance for the standalone ensemble model, with an accuracy of 96.4%, while the hybrid ensemble model improves on it by 1% for in-distribution data. The standalone model's performance, however, drops significantly when tested on three different datasets with a distribution shift from the training set. Our proposed hybrid model considerably reduces the performance gap between identical and out-of-distribution evaluation compared with the standalone model, indicating the decomposition technique's effectiveness for out-of-distribution generalization. | Wavelet-Based Hybrid Machine Learning Model for Out-of-distribution Internet Traffic Prediction |
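A minimal sketch of the wavelet-based hybrid idea: decompose the traffic series into sub-bands and fit one learner per band (SGD regression, one of the models listed above). The wavelet family, level, and lag order are assumptions; a full hybrid would also recombine the sub-band forecasts, e.g., via pywt.waverec.

```python
import numpy as np
import pywt
from sklearn.linear_model import SGDRegressor

def lagged(x, p=8):
    """Build (lag-matrix, target) pairs for one-step-ahead prediction."""
    X = np.stack([x[i:len(x) - p + i] for i in range(p)], axis=1)
    return X, x[p:]

traffic = np.abs(np.cumsum(np.random.default_rng(3).standard_normal(512)))
coeffs = pywt.wavedec(traffic, "db4", level=3)  # approximation + details

preds = []
for band in coeffs:                             # one learner per sub-band
    X, y = lagged(band)
    preds.append(SGDRegressor(max_iter=1000).fit(X, y).predict(X[-1:])[0])
print("next-step sub-band predictions:", preds)
```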
Piontkowski calculated the Euler number of the Jacobi factors of plane curve singularities with semigroups $\langle p, q\rangle$, $\langle 4, 2q, s\rangle$, $\langle 6,8,s\rangle$ and $\langle 6,10, s\rangle$. In this paper, we show that the Jacobi factor of any curve singularity admits a cell decomposition, by virtue of Pfister and Steenbrink's theory for punctual Hilbert schemes. We also introduce a computational method to determine the number of affine cells in the decomposition. Applying it, we compute the Euler number of the Jacobi factor of a singularity with semigroup $\langle 4,6,13\rangle$. Our result gives a counterexample to Piontkowski's calculation. | The Euler number of the Jacobi factor of a plane curve singularity whose semigroup is $\langle4,6,13\rangle$ |
Let $(e^{tA})_{t \geq 0}$ be a $C_0$-contraction semigroup on a 2-smooth Banach space $E$, let $(W_t)_{t \geq 0}$ be a cylindrical Brownian motion in a Hilbert space $H$, and let $(g_t)_{t \geq 0}$ be a progressively measurable process with values in the space $\gamma(H,E)$ of all $\gamma$-radonifying operators from $H$ to $E$. We prove that for all $0<p<\infty$ there exists a constant $C$, depending only on $p$ and $E$, such that for all $T \geq 0$ we have $\mathbb{E} \sup_{0\le t\le T} \big\| \int_0^t e^{(t-s)A} g_s \, dW_s \big\|^p \leq C \, \mathbb{E} \big(\int_0^T \| g_t \|_{\gamma(H,E)}^2 \, dt\big)^{p/2}$. For $p \geq 2$ the proof is based on the observation that $\psi(x) = \| x \|^p$ is Fr\'echet differentiable and its derivative satisfies the Lipschitz estimate $\| \psi'(x) - \psi'(y)\| \leq C(\| x \| + \| y \|)^{p-2} \| x-y \|$; the extension to $0<p<2$ proceeds via Lenglart's inequality. | A maximal inequality for stochastic convolutions in 2-smooth Banach spaces |
We report on dynamical quantum transport simulations for realistic molecular devices based on an approximate formulation of time-dependent Density Functional Theory with open boundary conditions. The method allows for the computation of various properties of junctions that are driven by alternating bias voltages. Besides the ac conductance for hexene connected to gold leads via thiol anchoring groups, we also investigate higher harmonics in the current for a benzenedithiol device. Comparison to a classical quasi-static model reveals that quantum effects may become important already for small ac bias and that the full dynamical simulations exhibit a much lower number of higher harmonics. Current rectification is also briefly discussed. | Higher harmonics and ac transport from time dependent density functional theory |
Semantic outdoor scene understanding based on 3D LiDAR point clouds is a challenging task for autonomous driving due to the sparse and irregular data structure. This paper takes advantage of the uneven range distribution of different LiDAR laser beams to propose a range-aware instance segmentation network, RangeSeg. RangeSeg uses a shared encoder backbone with two range-dependent decoders: a heavy decoder computes only the top of the range image, where far and small objects are located, to improve small-object detection accuracy, and a light decoder computes the whole range image at low computational cost. The results are further clustered by the DBSCAN method with a resolution-weighted distance function to obtain instance-level segmentation results. Experiments on the KITTI dataset show that RangeSeg outperforms state-of-the-art semantic segmentation methods with an enormous speedup and improves instance-level segmentation performance on small and far objects. The whole RangeSeg pipeline meets the real-time requirement on NVIDIA\textsuperscript{\textregistered} JETSON AGX Xavier with 19 frames per second on average. | RangeSeg: Range-Aware Real Time Segmentation of 3D LiDAR Point Clouds |
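A toy sketch of the clustering stage: DBSCAN with a resolution-weighted distance that down-weights separations at long range, where LiDAR points are intrinsically sparser. The weighting function below is invented for illustration; the paper's exact function is not reproduced here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def resolution_weighted(p, q):
    """Toy range-aware metric: LiDAR point spacing grows with range, so
    pairwise distances are down-weighted for far points."""
    mean_range = 0.5 * (np.linalg.norm(p) + np.linalg.norm(q))
    return np.linalg.norm(p - q) / (1.0 + 0.1 * mean_range)

pts = np.array([[1.0, 0.0, 0.0], [1.2, 0.0, 0.0],     # near pair
                [40.0, 0.0, 0.0], [42.0, 0.0, 0.0]])  # far pair, wider spacing
labels = DBSCAN(eps=0.5, min_samples=2,
                metric=resolution_weighted).fit(pts).labels_
print(labels)  # both pairs cluster despite the larger far-range spacing
```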
Color is one of the main visual channels used for highlighting elements of interest in visualization. However, in multi-class scatterplots, color highlighting often comes at the expense of degraded color discriminability. In this paper, we argue for context-preserving highlighting during the interactive exploration of multi-class scatterplots to achieve desired pop-out effects, while maintaining good perceptual separability among all classes and consistent color mapping schemes under varying points of interest. We do this by first generating two contrastive color mapping schemes with large and small contrasts to the background. Both schemes maintain good perceptual separability among all classes and ensure that when colors from the two palettes are assigned to the same class, they have a high color consistency in color names. We then interactively combine these two schemes to create a dynamic color mapping for highlighting different points of interest. We demonstrate the effectiveness through crowd-sourced experiments and case studies. | Interactive Context-Preserving Color Highlighting for Multiclass Scatterplots |
The present paper develops recursive algorithms to track shifts in the resonance frequency of linear systems in real time. To date, automatic resonance tracking has been limited to non-model-based approaches, which rely solely on the phase difference between a specific input and output of the system. Instead, we propose a transformation of the system into a complex-valued representation, which allows us to abstract the resonance shifts as an exogenous disturbance acting on the excitation frequency, perturbing the excitation frequency from the natural frequency of the plant. We then discuss the resonance tracking task in two parts: recursively identifying the frequency disturbance and incorporating an update of the excitation frequency in the algorithm. The complex representation of the system simplifies the design of resonance tracking algorithms due to the applicability of well-established techniques. We discuss the stability of the proposed scheme, even in cases that seriously challenge current phase-based approaches, such as nonmonotonic phase differences and multiple-input multiple-output systems. Numerical simulations further demonstrate the performance of the proposed resonance tracking scheme. | Model-based resonance tracking of linear systems |
In this work, we study the inverse problem of recovering a potential coefficient in the subdiffusion model, which involves a Djrbashian-Caputo derivative of order $\alpha\in(0,1)$ in time, from the terminal data. We prove that the inverse problem is locally Lipschitz for small terminal time, under certain conditions on the initial data. This result extends the result in Choulli and Yamamoto (1997) for the standard parabolic case to the fractional case. The analysis relies on refined properties of two-parameter Mittag-Leffler functions, e.g., complete monotonicity and asymptotics. Further, we develop an efficient and easy-to-implement algorithm for numerically recovering the coefficient based on (preconditioned) fixed point iteration and Anderson acceleration. The efficiency and accuracy of the algorithm are illustrated with several numerical examples. | An Inverse Potential Problem for Subdiffusion: Stability and Reconstruction |
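A generic sketch of fixed-point iteration with depth-one Anderson mixing, the acceleration named above, applied to an arbitrary contraction rather than the paper's reconstruction operator.

```python
import numpy as np

def anderson_fixed_point(g, x0, tol=1e-10, maxit=100):
    """Fixed-point iteration x <- g(x) with depth-1 Anderson mixing:
    extrapolate along the last two residuals f = g(x) - x."""
    x_prev, f_prev = x0, g(x0) - x0
    x = x0 + f_prev                   # plain first step
    for _ in range(maxit):
        f = g(x) - x
        if np.linalg.norm(f) < tol:
            return x
        df = f - f_prev
        alpha = (f @ df) / (df @ df)  # least-squares mixing weight
        x_new = (1 - alpha) * (x + f) + alpha * (x_prev + f_prev)
        x_prev, f_prev, x = x, f, x_new
    return x

# Toy contraction: solve x = cos(x) componentwise (fixed point ~0.739085).
print(anderson_fixed_point(np.cos, np.zeros(3)))
```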
Domain adaptation is a common problem in robotics, with applications such as transferring policies from simulation to real world and lifelong learning. Performing such adaptation, however, requires informative data about the environment to be available during the adaptation. In this paper, we present domain curiosity -- a method of training exploratory policies that are explicitly optimized to provide data that allows a model to learn about the unknown aspects of the environment. In contrast to most curiosity methods, our approach explicitly rewards learning, which makes it robust to environment noise without sacrificing its ability to learn. We evaluate the proposed method by comparing how much a model can learn about environment dynamics given data collected by the proposed approach, compared to standard curious and random policies. The evaluation is performed using a toy environment, two simulated robot setups, and on a real-world haptic exploration task. The results show that the proposed method allows data-efficient and accurate estimation of dynamics. | Domain Curiosity: Learning Efficient Data Collection Strategies for Domain Adaptation |
We study the two-dimensional Anisotropic KPZ equation (AKPZ) formally given by \begin{equation*} \partial_t H=\frac12\Delta H+\lambda((\partial_1 H)^2-(\partial_2 H)^2)+\xi\,, \end{equation*} where $\xi$ is a space-time white noise and $\lambda$ is a strictly positive constant. While the classical two-dimensional KPZ equation, whose nonlinearity is $|\nabla H|^2=(\partial_1 H)^2+(\partial_2 H)^2$, can be linearised via the Cole-Hopf transformation, this is not the case for AKPZ. We prove that the stationary solution to AKPZ (whose invariant measure is the Gaussian Free Field) is superdiffusive: its diffusion coefficient diverges for large times as $\sqrt{\log t}$ up to $\log\log t$ corrections, in a Tauberian sense. Morally, this says that the correlation length grows with time like $t^{1/2}\times (\log t)^{1/4}$. Moreover, we show that if the process is rescaled diffusively ($t\to t/\varepsilon^2, x\to x/\varepsilon, \varepsilon\to0$), then it evolves non-trivially already on time-scales of order approximately $1/\sqrt{|\log\varepsilon|}\ll1$. Both claims hold as soon as the coefficient $\lambda$ of the nonlinearity is non-zero. These results are in contrast with the belief, common in the mathematics community, that the AKPZ equation is diffusive at large scales and, under simple diffusive scaling, converges to the two-dimensional Stochastic Heat Equation (2dSHE) with additive noise (i.e. the case $\lambda=0$). | The stationary AKPZ equation: logarithmic superdiffusivity |
Hard X-rays observed in Active Galactic Nuclei (AGNs) are thought to originate from the Comptonization of the optical/UV accretion disk photons in a hot corona. Polarization studies of these photons can help to constrain the corona geometry and the plasma properties. We have developed a ray-tracing code that simulates the Comptonization of accretion disk photons in coronae of arbitrary shape, and use it here to study the polarization of the X-ray emission from wedge and spherical coronae. We study the predicted polarization signatures for the fully relativistic and various approximate treatments of the elemental Compton scattering processes. We furthermore use the code to evaluate the impact of non-thermal electrons and cyclo-synchrotron photons on the polarization properties. Finally, we model the NuSTAR observations of the Seyfert I galaxy Mrk 335 and predict the associated polarization signal. Our studies show that X-ray polarimetry missions such as NASA's Imaging X-ray Polarimetry Explorer (IXPE) and the X-ray Imaging Polarimetry Explorer (XIPE) proposed to ESA will provide valuable new information about the physical properties of the plasma close to the event horizon of AGN black holes. | The X-ray Polarization of the Accretion Disk Coronae of Active Galactic Nuclei |
Despite the recent success of long-tailed object detection, almost all long-tailed object detectors are developed based on the two-stage paradigm. In practice, one-stage detectors are more prevalent in industry because they have a simple and fast pipeline that is easy to deploy. However, in the long-tailed scenario, this line of work has not been explored so far. In this paper, we investigate whether one-stage detectors can perform well in this case. We discover that the primary obstacle preventing one-stage detectors from achieving excellent performance is that categories suffer from different degrees of positive-negative imbalance under the long-tailed data distribution. The conventional focal loss balances the training process with the same modulating factor for all categories, thus failing to handle the long-tailed problem. To address this issue, we propose the Equalized Focal Loss (EFL), which rebalances the loss contribution of positive and negative samples of different categories independently according to their imbalance degrees. Specifically, EFL adopts a category-relevant modulating factor which can be adjusted dynamically by the training status of different categories. Extensive experiments conducted on the challenging LVIS v1 benchmark demonstrate the effectiveness of our proposed method. With an end-to-end training pipeline, EFL achieves 29.2% overall AP and obtains significant performance improvements on rare categories, surpassing all existing state-of-the-art methods. The code is available at https://github.com/ModelTC/EOD. | Equalized Focal Loss for Dense Long-Tailed Object Detection |
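A minimal sketch of a category-relevant modulating factor: a per-class offset added to the base focusing parameter of a sigmoid focal loss. EFL's dynamic update of these offsets from training statistics, and its additional weighting factor, are omitted; the offsets below are hand-picked for illustration.

```python
import torch
import torch.nn.functional as F

def equalized_focal_loss(logits, targets, gamma_b=2.0, gamma_cat=None):
    """Sigmoid focal loss whose focusing exponent varies per category:
    gamma_cat[j] >= 0 adds extra down-weighting of easy examples for
    class j, on top of the shared base factor gamma_b."""
    p = torch.sigmoid(logits)                # (N, C)
    pt = torch.where(targets > 0, p, 1 - p)  # prob of the true label
    gamma = gamma_b + (gamma_cat if gamma_cat is not None
                       else torch.zeros(logits.shape[1]))
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    return ((1 - pt) ** gamma * bce).mean()

logits, targets = torch.randn(8, 4), torch.randint(0, 2, (8, 4)).float()
rare_boost = torch.tensor([0.0, 0.0, 1.0, 2.0])  # larger gamma for tail classes
print(equalized_focal_loss(logits, targets, gamma_cat=rare_boost))
```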
We present a theory of unbinding transitions for membranes that interact via short and long receptor/ligand bonds. The details of the unbinding behavior of the membranes are governed by the binding energies and concentrations of receptors and ligands. We investigate the unbinding behavior of these membranes with Monte Carlo simulations and via a comparison with strings, deriving the scaling laws for strings analytically. The exact analytic results provide scaling estimates for membranes in the vicinity of the critical point. | Unbinding transitions of multicomponent membranes and strings |
The gas-solid budget of carbon in protoplanetary disks is related to the composition of the cores and atmospheres of the planets forming in them. The key gas-phase carbon carriers CO, C$^{0}$ and C$^{+}$ can now be observed in disks. Since the gas-phase carbon abundance in disks has not yet been well characterized, we aim to obtain new constraints on the [C]/[H] ratio in a sample of disks, and to get an overview of the strength of [CI] and warm CO emission. We carried out a survey of the CO$\,6$--$5$ and [CI]$\,1$--$0$ and $2$--$1$ lines towards $37$ disks with APEX, and supplemented it with [CII] data from the literature. The data are interpreted using a grid of models produced with the DALI code. We also investigate how well the gas-phase carbon abundance can be determined in light of parameter uncertainties. The CO$\,6$--$5$ line is detected in $13$ out of $33$ sources, the [CI]$\,1$--$0$ in $6$ out of $12$, and the [CI]$\,2$--$1$ in $1$ out of $33$. With deep integrations, the first unambiguous detections of [CI]~$1$--$0$ in disks are obtained, in TW~Hya and HD~100546. Gas-phase carbon abundance reductions of a factor $5$--$10$ or more can be identified robustly based on CO and [CI] detections. The atomic carbon detection in TW~Hya confirms a factor $100$ reduction of [C]/[H]$_{\rm gas}$ in that disk, while the data are consistent with an ISM-like carbon abundance for HD~100546. In addition, BP~Tau, T~Cha, HD~139614, HD~141569, and HD~100453 are either carbon-depleted or gas-poor disks. The low [CI]~$2$--$1$ detection rates in the survey mostly reflect insufficient sensitivity to detect T~Tauri disks. The Herbig~Ae/Be disks with CO and [CII] upper limits below the models are debris-disk-like systems. A roughly order-of-magnitude increase in sensitivity compared to our survey is required to obtain useful constraints on the gas-phase [C]/[H] ratio in most of the targeted systems. | Observations and modelling of CO and [CI] in disks. First detections of [CI] and constraints on the carbon abundance |
Hyperproperties generalize trace properties by expressing relations between multiple computations. Hyperproperties include policies from information-flow security, like observational determinism or non-interference, and many other system properties including promptness and knowledge. In this paper, we give an overview of the model checking problem for temporal hyperlogics. Our starting point is the model checking algorithm for HyperLTL, a reduction to B\"uchi automata emptiness. This basic construction can be extended with propositional quantification, resulting in an algorithm for HyperQPTL. It can also be extended with branching time, resulting in an algorithm for HyperCTL*. However, it is not possible to have both extensions at the same time: the model checking problem of HyperQCTL* is undecidable. An attractive compromise is offered by MPL[E], i.e., monadic path logic extended with the equal-level predicate. The expressiveness of MPL[E] falls strictly between that of HyperCTL* and HyperQCTL*. MPL[E] subsumes both HyperCTL* and HyperKCTL*, the extension of HyperCTL* with the knowledge operator. We show that the model checking problem for MPL[E] is still decidable. | Model Checking Algorithms for Hyperproperties |
We prove that the punctured generalized conifolds and punctured orbifolded conifolds are mirror symmetric under the SYZ program with quantum corrections. This mathematically confirms the gauge-theoretic prediction by Aganagic-Karch-L\"ust-Miemiec, and also provides a supportive evidence to Morrison's conjecture that geometric transitions are reversed under mirror symmetry. | Geometric transitions and SYZ mirror symmetry |
Durgapal's fifth isotropic solution describing a spherically symmetric and static matter distribution is extended to an anisotropic scenario. To do so, we employ gravitational decoupling through the minimal geometric deformation scheme. This approach allows one to split Einstein's field equations into two simpler sets of equations, one corresponding to the isotropic sector and the other to the anisotropic sector described by an extra gravitational source. The isotropic sector is solved by Durgapal's model, and the anisotropic sector is solved once a suitable choice of the minimal geometric deformation is imposed. The obtained model represents some strange star candidates and fulfills all the requirements of a well-behaved physical solution to Einstein's field equations. | Compact Anisotropic Models in General Relativity by Gravitational Decoupling |
State readout of trapped-ion qubits with trap-integrated detectors can address important challenges for scalable quantum computing, but the strong rf electric fields used for trapping can impact detector performance. Here, we report on NbTiN superconducting nanowire single-photon detectors (SNSPDs) employing grounded aluminum mirrors as electrical shielding that are integrated into linear surface-electrode rf ion traps. The shielded SNSPDs can be successfully operated at applied rf trapping potentials of up to $\mathrm{54\,V_{peak}}$ at $\mathrm{70\,MHz}$ and temperatures of up to $\mathrm{6\,K}$, with a maximum system detection efficiency of $\mathrm{68\,\%}$. This performance should be sufficient to enable parallel high-fidelity state readout of a wide range of trapped ion species in typical cryogenic apparatus. | Trap-Integrated Superconducting Nanowire Single-Photon Detectors with Improved RF Tolerance for Trapped-Ion Qubit State Readout |
In practical conjugate gradient (CG) computations it is important to monitor the quality of the approximate solution to $Ax=b$, so that the CG algorithm can be stopped when the required accuracy is reached. The relevant convergence characteristics, like the $A$-norm of the error or the normwise backward error, cannot be easily computed. However, they can be estimated. Such estimates often depend on approximations of the smallest or largest eigenvalue of $A$. In the paper we introduce a new upper bound for the $A$-norm of the error, which is closely related to the Gauss-Radau upper bound, and discuss the problem of choosing the parameter $\mu$, which should represent a lower bound for the smallest eigenvalue of $A$. The new bound has several practical advantages; the most important one is that it can be used as an approximation to the $A$-norm of the error even if $\mu$ is not exactly a lower bound for the smallest eigenvalue of $A$. In this case, $\mu$ can be chosen, e.g., as the smallest Ritz value or its approximation. We also describe a very cheap algorithm, based on the incremental norm estimation technique, which allows one to estimate the smallest and largest Ritz values during the CG computations. An improvement in the accuracy of these estimates of the extreme Ritz values is possible, at the cost of storing the CG coefficients and solving a linear system with a tridiagonal matrix at each CG iteration. Finally, we discuss how to cheaply approximate the normwise backward error. The numerical experiments demonstrate the efficiency of the estimates of the extreme Ritz values, and show their practical use in error estimation in CG. | Approximating the extreme Ritz values and upper bounds for the $A$-norm of the error in CG |
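For context, the sketch below runs plain CG while accumulating the quantities $\gamma_i \|r_i\|^2$; their delay-$d$ partial sums give the classical lower bound on the $A$-norm of the error. The Gauss-Radau-type upper bound studied in the paper additionally requires the parameter $\mu$ and is not reproduced here.

```python
import numpy as np

def cg_error_lower_bound(A, b, d=4, maxit=60):
    """Plain CG that stores gamma_i * ||r_i||^2; the delay-d partial sums
    est_k = sqrt(sum_{i=k}^{k+d-1} gamma_i ||r_i||^2) are lower bounds on
    the A-norm of the error at step k."""
    x, r = np.zeros_like(b), b.copy()
    p, rr = r.copy(), b @ b
    terms, xs = [], []
    for _ in range(maxit):
        xs.append(x.copy())       # iterate x_k before the update
        Ap = A @ p
        gamma = rr / (p @ Ap)
        terms.append(gamma * rr)  # gamma_k * ||r_k||^2
        x = x + gamma * p
        r = r - gamma * Ap
        rr, rr_old = r @ r, rr
        p = r + (rr / rr_old) * p
    ests = [np.sqrt(sum(terms[k:k + d])) for k in range(maxit - d)]
    return xs, ests

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((60, 60)))
A = Q @ np.diag(np.linspace(0.1, 10.0, 60)) @ Q.T  # SPD test matrix
b = rng.standard_normal(60)
xs, ests = cg_error_lower_bound(A, b)
e = np.linalg.solve(A, b) - xs[10]
print(ests[10], "<=", np.sqrt(e @ A @ e))          # bound holds
```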
We report neutron star predictions based on our most recent equations of state. These are derived from chiral effective field theory, which allows for a systematic development of nuclear forces, order by order. We utilize high-quality two-nucleon interactions and include all three-nucleon forces up to fourth order in the chiral expansion. Our ab initio predictions are restricted to the domain of applicability of chiral effective field theory. However, stellar matter in the interior of neutron stars can be up to several times denser than normal nuclear matter at saturation, and its composition is essentially unknown. Following established practices, we extend our microscopic predictions to higher densities matching piecewise polytropes. The radius of the average-size neutron star, about 1.4 solar masses, is sensitive to the pressure at normal densities, and thus it is suitable to constrain ab initio theories of the equation of state. For this reason, we focus on the radius of medium-mass stars. We compare our results with other theoretical predictions and recent constraints. | The Equation of State of Neutron-Rich Matter at Fourth Order of Chiral Effective Field Theory and the Radius of a Medium-Mass Neutron Star |
Newton-Cartan manifolds and the Galilei group are defined by the use of a co-rank one degenerate metric tensor. The Newton-Cartan connection is lifted to the degenerate spinor bundle over a Newton-Cartan 4-manifold with the aid of the degenerate spin group. The Levy-Leblond equation is constructed with the lifted connection. | Degenerate Spin Structures and the Levy-Leblond Equation
This paper provides a finite sample bound for the error term in the Edgeworth expansion for a sum of independent, potentially discrete, nonlattice random vectors, using a uniform-in-$P$ version of the weaker Cram\'{e}r condition in Angst and Poly (2017). This finite sample bound can be used to derive an Edgeworth expansion that is uniform over the distributions of the random vectors. Using this result, we derive a uniform-in-$P$ higher order expansion of resampling-based distributions. | A Uniform-in-$P$ Edgeworth Expansion under Weak Cram\'{e}r Conditions |
This paper concerns Kalman filtering when the measurements of the process are censored. The censored measurements are addressed by the Tobit model of Type I and are one-dimensional with two censoring limits, while the (hidden) state vectors are multidimensional. For this model, Bayesian estimates for the state vectors are provided through a recursive algorithm of Kalman filtering type. Experiments are presented to illustrate the effectiveness and applicability of the algorithm. The experiments show that the proposed method outperforms other filtering methodologies in minimizing the computational cost as well as the overall Root Mean Square Error (RMSE) for synthetic and real data sets. | Kalman Filtering With Censored Measurements |
We describe the short-distance properties of the spacetime of a system of D-particles by viewing their matrix-valued coordinates as coupling constants of a deformed worldsheet $\sigma$-model. We show that the Zamolodchikov metric on the associated moduli space naturally encodes properties of the non-abelian dynamics, and from this we derive new spacetime uncertainty relations directly from the quantum string theory. The non-abelian uncertainties exhibit decoherence effects which suggest the interplay of quantum gravity in multiple D-particle dynamics. | Spacetime Quantization from Non-abelian D-particle Dynamics |
Federated Learning (FL) has become an active and promising distributed machine learning paradigm. As a result of statistical heterogeneity, recent studies clearly show that the performance of popular FL methods (e.g., FedAvg) deteriorates dramatically due to the client drift caused by local updates. This paper proposes a novel Federated Learning algorithm (called IGFL), which leverages both Individual and Group behaviors to mimic distribution, thereby improving the ability to deal with heterogeneity. Unlike existing FL methods, our IGFL can be applied to both client and server optimization. As a by-product, we propose a new attention-based federated learning approach in the server optimization of IGFL. To the best of our knowledge, this is the first work to incorporate attention mechanisms into federated optimization. We conduct extensive experiments and show that IGFL can significantly improve the performance of existing federated learning methods. Especially when the distributions of data among individuals are diverse, IGFL can improve the classification accuracy by about 13% compared with prior baselines. | Behavior Mimics Distribution: Combining Individual and Group Behaviors for Federated Learning
We have constructed a holographic superfluid with gauge-axion coupling. Depending on whether the coupling is positive or negative, the system displays metallic or insulating behavior in its normal state. A significant feature of the system is the appearance of a mid-IR peak in the alternating current (AC) conductivity in a certain range of parameters. This peak arises due to competition between explicit symmetry breaking (ESB) and spontaneous symmetry breaking (SSB), which results in the presence of a pseudo-Goldstone mode. Moreover, a dip in low-frequency AC conductivity is observed, stemming from the excitation of the SSB Goldstone mode. In the superfluid phase, the effect of gauge-axion coupling on the condensation or superfluid energy gap is only amplified in the presence of strong momentum dissipation. Notably, for the case with negative gauge-axion coupling, a hard-gap-like behavior at low frequency and a pronounced peak at intermediate frequency are observed, indicating that the evolution of the superfluid component is distinct from that of positive coupling. | Holographic superfluid with gauge-axion coupling |
In the top-down approach to multi-name credit modeling, calculation of single name sensitivities appears possible, at least in principle, within the so-called random thinning (RT) procedure which dissects the portfolio risk into individual contributions. We make an attempt to construct a practical RT framework that enables efficient calculation of single name sensitivities in a top-down framework, and can be extended to valuation and risk management of bespoke tranches. Furthermore, we propose a dynamic extension of the RT method that enables modeling of both idiosyncratic and default-contingent individual spread dynamics within a Monte Carlo setting in a way that preserves the portfolio "top"-level dynamics. This results in a model that is not only calibrated to tranche and single name spreads, but can also be tuned to approximately match given levels of spread volatilities and correlations of names in the portfolio. | Climbing Down from the Top: Single Name Dynamics in Credit Top Down Models
BaFe2Se3 (Pnma, CsAg2I3-type structure), recently assumed to show superconductivity at ~ 11 K, exhibits a pressure-dependent structural transition to the CsCu2Cl3-type structure (Cmcm space group) around 60 kbar, as evidenced from pressure-dependent synchrotron powder diffraction data. Temperature-dependent synchrotron powder diffraction data indicate an evolution of the room-temperature BaFe2Se3 structure towards a high symmetry CsCu2Cl3 form upon heating. Around 425 K BaFe2Se3 undergoes a reversible, first order isostructural transition, that is supported by the differential scanning calorimetry data. The temperature-dependent structural changes occur in two stages, as determined by the alignment of the FeSe4 tetrahedra and corresponding adjustments of the positions of Ba atoms. On further heating, a second order phase transformation into the Cmcm structure is observed at 660 K. A rather unusual combination of isostructural and second-order phase transformations is parameterized within phenomenological theory assuming high-order expansion of Landau potential. A generic phase diagram mapping observed structures is proposed on the basis of the parameterization. | Crystal Structure of BaFe2Se3 as a Function of Temperature and Pressure: Phase Transition Phenomena and High-Order Expansion of Landau Potential |
The spatial profiles and the dissipation characteristics of spin-wave quasi-eigenmodes are investigated in small magnetic Ni$_{81}$Fe$_{19}$ ring structures using Brillouin light scattering microscopy. It is found that the decay constant of a mode decreases with increasing mode frequency. Indications for a contribution of three-magnon processes to the dissipation of higher-order spin-wave quasi-eigenmodes are found. | Dissipation characteristics of quantized spin waves in nano-scaled magnetic ring structures
We propose a new test case prioritization technique that combines both mutation-based and diversity-based approaches. Our diversity-aware mutation-based technique relies on the notion of mutant distinguishment, which aims to distinguish one mutant's behavior from another, rather than from the original program. We empirically investigate the relative cost and effectiveness of the mutation-based prioritization techniques (i.e., using both the traditional mutant kill and the proposed mutant distinguishment) with 352 real faults and 553,477 developer-written test cases. The empirical evaluation considers both the traditional and the diversity-aware mutation criteria in various settings: single-objective greedy, hybrid, and multi-objective optimization. The results show that there is no single dominant technique across all the studied faults. To this end, we show when and why each of the mutation-based prioritization criteria performs poorly, using a graphical model called the Mutant Distinguishment Graph (MDG) that demonstrates the distribution of the fault-detecting test cases with respect to mutant kills and distinguishment. | Empirical Evaluation of Mutation-based Test Prioritization Techniques
We show that in automatically R-conserving minimal SUSY left-right symmetric models there is a theoretical upper limit on the mass of the right-handed $W_R$ boson, given by $M_{W_R}\leq g M_{SUSY}/f$, where $M_{SUSY}$ is the scale of supersymmetry breaking, $g$ is the weak gauge coupling and $f$ is the Yukawa coupling responsible for generating the right-handed neutrino masses. If $M_{W_R}$ violates the above limit, the ground state of the theory breaks electromagnetism. The only way to avoid this bound while keeping the theory automatically R-conserving is to expand the theory to include very specific kinds of additional multiplets and to demand unnatural fine-tuning of their couplings. | Upper bound on the $W_R$ mass in automatically R-conserving SUSY models
The neutron skin of atomic nuclei impacts the structure of neutron-rich nuclei, the equation of state of nucleonic matter, and the size of neutron stars. Here we predict the neutron skin of selected light- and medium-mass nuclei using coupled-cluster theory and the auxiliary field diffusion Monte Carlo method with two- and three-nucleon forces from chiral effective field theory. We find a linear correlation between the neutron skin and the isospin asymmetry in agreement with the liquid-drop model and compare with data. We also extract the linear relationship that describes the difference between neutron and proton radii of mirror nuclei and quantify the effect of charge symmetry breaking terms in the nuclear Hamiltonian. Our results for the mirror-difference charge radii and binding energies per nucleon agree with existing data. | Trends of Neutron Skins and Radii of Mirror Nuclei from First Principles |
The concept of mobility is discussed in the case of unstrained and strained nanoscale DG MOSFETs thanks to particle Monte Carlo device simulation. Without the introduction of a specific scattering phenomenon for short-channel devices, the apparent mobility extracted from simulated electrical characteristics decreases with the shrinking of the channel length, as experimentally observed elsewhere. We show that this reduction at room temperature is caused by non-stationary effects. Moreover, both simulation results and experimental data may be well reproduced by a Matthiessen-like model, using a "ballistic mobility" extracted from MC simulations together with the usual long-channel mobility. | Monte Carlo study of apparent mobility reduction in nano-MOSFETs
Leibniz's Monadology mentions perceptional and sentimental variations of the individual in the city: the interaction of people with people and events. Film festivals are highly sentimental events of multicultural cities. Each movie has a different sentimental effect, and the interactions with the movies have reflections that can be observed on social media. This analysis aims to apply distant reading to Berlinale tweets collected during the festival. In contrast to close reading, distant reading lets authors observe patterns in large collections of data. The analysis is temporal and sentimental in a multilingual domain, and strongly positive and negative time intervals are analysed. For this purpose, we trained a deep sentiment network with multilingual embeddings. These multilingual embeddings are aligned in latent space. We trained the network with a multilingual dataset in three languages: English, German and Spanish. The trained model achieves a 0.78 test score and is applied to tweets with the Berlinale hashtag during the festival. Although the sentiment analysis does not reflect the award-winning films, we observe a weekly routine in sentiment, which can mislead a close-reading analysis. We also remark on the popularity of directors and actors. | Multilingual, Temporal and Sentimental Distant-Reading of City Events
We consider quantum tunnelling in anisotropic spin systems in a magnetic field perpendicular to the anisotropy axis. In the domain of small fields the problem of calculating the tunnelling splitting of energy levels is reduced to constructing a perturbation series with degeneracy, the order of degeneracy being proportional to the spin value. Partial summation of this series, taking into account ``dangerous terms'' with small denominators, is performed, and the value of the tunnelling splitting is calculated with allowance for the first correction with respect to the magnetic field. | Tunnelling series in terms of perturbation theory for quantum spin systems
We study generalized free fields (GFF) from the point of view of information measures. We first review conformal GFF, their holographic representation, and the ambiguities in the assignation of algebras to regions that arise in these theories. Then we study the mutual information (MI) in several geometric configurations. The MI displays unusual features at the short distance limit: a leading volume term rather than an area term, and a logarithmic term in any dimensions rather than only for even dimensions as in ordinary CFT's. We find the dependence of some subleading terms on the conformal dimension $\Delta$ of the GFF. We study the long distance limit of the MI for regions with boundary in the null cone. The pinching limit of these surfaces show the GFF behaves as an interacting model from the MI point of view. The pinching exponents depend on the choice of algebra. The entanglement wedge algebra choice allows these models to ``fake'' causality, giving results consistent with its role in the description of large $N$ models. | Mutual Information of Generalized Free Fields |
We present a multi-level solver for drawing constrained Gaussian realizations or finding the maximum likelihood estimate of the CMB sky, given noisy sky maps with partial sky coverage. The method converges substantially faster than existing Conjugate Gradient (CG) methods for the same problem. For instance, for the 143 GHz Planck frequency channel, only 3 multi-level W-cycles result in an absolute error smaller than 1 microKelvin in any pixel. Using 16 CPU cores, this translates to a computational expense of 6 minutes wall time per realization, plus 8 minutes wall time for a power spectrum-dependent precomputation. Each additional W-cycle reduces the error by more than an order of magnitude, at an additional computational cost of 2 minutes. For comparison, we have never been able to achieve similar absolute convergence with conventional CG methods for this high signal-to-noise data set, even after thousands of CG iterations and employing expensive preconditioners. The solver is part of the Commander 2 code, which is available with an open source license at http://commander.bitbucket.org/. | A multi-level solver for Gaussian constrained CMB realizations |
In this paper, we give estimates for the speed of convergence towards a limiting stable law in the recently introduced setting of mod-$\phi$ convergence. Namely, we define a notion of zone of control, closely related to mod-$\phi$ convergence, and we prove estimates of Berry-Esseen type under this hypothesis. Applications include: the winding number of a planar Brownian motion; classical approximations of stable laws by compound Poisson laws; examples stemming from determinantal point processes (characteristic polynomials of random matrices and zeroes of random analytic functions); sums of variables with an underlying dependency graph (for which we recover a result of Rinott, obtained by Stein's method); the magnetization in the $d$-dimensional Ising model; and functionals of Markov chains. | Mod-$\phi$ convergence, II: Estimates on the speed of convergence |
Event detection has long been troubled by the \emph{trigger curse}: overfitting the trigger will harm the generalization ability while underfitting it will hurt the detection performance. This problem is even more severe in few-shot scenario. In this paper, we identify and solve the trigger curse problem in few-shot event detection (FSED) from a causal view. By formulating FSED with a structural causal model (SCM), we found that the trigger is a confounder of the context and the result, which makes previous FSED methods much easier to overfit triggers. To resolve this problem, we propose to intervene on the context via backdoor adjustment during training. Experiments show that our method significantly improves the FSED on ACE05, MAVEN and KBP17 datasets. | Honey or Poison? Solving the Trigger Curse in Few-shot Event Detection via Causal Intervention |
In this paper we consider a class of systems of two coupled real scalar fields in bidimensional spacetime, with the main motivation of studying classical or linear stability of soliton solutions. Firstly, we present the class of systems and comment on the topological profile of soliton solutions one can find from the first-order equations that solve the equations of motion. After doing that, we follow the standard approach to classical stability to introduce the main steps one needs to obtain the spectra of Schr\"odinger operators that appear in this class of systems. We consider a specific system, from which we illustrate the general calculations and present some analytical results. We also consider another system, more general, and we present another investigation, that introduces new results and offers a comparison with the former investigations. | Soliton Stability in Systems of Two Real Scalar Fields |
For $U(2)$-invariant 4-metrics, we show that the $B^t$-flat metrics are very different from the other canonical metrics (Bach-flat, Einstein, extremal K\"ahler, etc). We show every $U(2)$-invariant metric is conformal to two separate K\"ahler metrics, leading to ambiK\"ahler structures. Using this observation we find new complete extremal K\"ahler metrics on the total spaces of $\mathcal{O}(-1)$ and $\mathcal{O}(+1)$ that are conformal to the Taub-bolt metric. In addition to its usual hyperK\"ahler structure, the Taub-NUT's conformal class contains two additional complete K\"ahler metrics that make up an ambiK\"ahler pair, making five independent compatible complex structures for the Taub-NUT, each of which has a conformally K\"ahler (1,1) form. | Canonical metrics and ambiK\"ahler structures on 4-manifolds with $U(2)$ symmetry
We show that general parity-violating 3d conformal field theories show a double copy structure for momentum space 3-point functions of conserved currents, stress tensor and marginal scalar operators. Splitting up the CFT correlator into two parts - called homogeneous and non-homogeneous - we show that double copy relations exist for each part separately. We arrive at similar conclusions regarding double copy structures using tree-level correlators of massless fields in $dS_4$. We also discuss the flat space limit of these correlators. We further extend the double copy analysis to correlators involving higher-spin conserved currents, which suggests that the spin-$s$ current correlator can be thought of as $s$ copies of the spin one current correlator. | Double copy structure of parity-violating CFT correlators |
We report that the classical phenomena of optical activity and circular dichroism, which are traditionally associated with chirality (helicity) of organic molecules, proteins and inorganic structures, can be observed in non-chiral artificial media. Intriguingly, our metamaterial structure yields exceptionally strong resonant optical activity, which also leads to the appearance of a backward wave, a characteristic sign of negative-index media. | Optical Activity of Planar Achiral Metamaterials |
The scientific career of Uruguayan theoretical physicist Enrique Loedel Palumbo in Argentina illustrates the intense intellectual exchange that existed between these neighboring countries in the first third of the twentieth century. In this paper we briefly discuss the scientific training of this scientist in Uruguay, his subsequent incorporation to the Institute of Physics of La Plata, in Buenos Aires, and his first steps in research under the tutelage of German professor Richard Gans. | Scientific relations between Uruguay and Argentina in the 1920s: the exact sciences |
A three-dimensional $\pm J$ XY spin-glass model is investigated by a nonequilibrium relaxation method. We have introduced a new criterion for the finite-time scaling analysis. A transition temperature is obtained from the crossing point of the data. The scaling analysis on the relaxation functions of the spin-glass susceptibility and the chiral-glass susceptibility shows that both transitions occur simultaneously. The result is checked by relaxation functions of the Binder parameters and the glass correlation lengths of the spin and the chirality. Every result is consistent if we consider that the transition is driven by the spin degrees of freedom. | Finite spin-glass transition of the $\pm J$ XY model in three dimensions
We study energy management policies for the compression and transmission of source data collected by an energy-harvesting sensor node with a finite energy buffer (e.g., rechargeable battery) and a finite data buffer (memory) between source encoder and channel encoder. The sensor node can adapt the source and channel coding rates depending on the observation and channel states. In such a system, the absence of precise information about the amount of energy available in the future is a key challenge. We provide analytical bounds and scaling laws for the average distortion that depend on the size of the energy and data buffers. We furthermore design a resource allocation policy that achieves almost optimal distortion scaling. Our results demonstrate that the energy leakage of state-of-the-art energy management policies can be avoided by jointly controlling the source and channel coding rates. | Energy-Neutral Source-Channel Coding with Battery and Memory Size Constraints
We propose a novel technique to estimate the masses of super massive black holes (SMBHs) residing at the centres of massive galaxies in the nearby Universe using simple photometry. Aperture photometry using SEXTRACTOR is employed to determine the central intensity ratio (CIR) at the optical centre of the galaxy image for a sample of 49 nearby galaxies with SMBH mass estimations. We find that the CIR of ellipticals and classical bulges is strongly correlated with SMBH masses whereas pseudo bulges and ongoing mergers show significant scatter. Also, the CIR of low luminosity AGNs in the sample shows significant connection with the 5 GHz nuclear radio emission suggesting a stronger link between the former and the SMBH evolution in these galaxies. In addition, it is seen that various structural and dynamical properties of the SMBH host galaxies are correlated with the CIR making the latter an important parameter in galaxy evolution studies. Finally, we propose the CIR to be an efficient and simple tool not only to distinguish classical bulges from pseudo bulges but also to estimate the mass of the central SMBH. | Study of central light concentration in nearby galaxies |
Advanced Air Mobility (AAM) introduces a new, efficient mode of transportation with the use of vehicle autonomy and electrified aircraft to provide increasingly autonomous transportation between previously underserved markets. Safe and efficient navigation of low altitude aircraft through highly dense environments requires the integration of a multitude of complex observations, such as surveillance, knowledge of vehicle dynamics, and weather. The processing and reasoning on these observations pose challenges due to the various sources of uncertainty in the information while ensuring cooperation with a variable number of aircraft in the airspace. These challenges coupled with the requirement to make safety-critical decisions in real-time rule out the use of conventional separation assurance techniques. We present a decentralized reinforcement learning framework to provide autonomous self-separation capabilities within AAM corridors with the use of speed and vertical maneuvers. The problem is formulated as a Markov Decision Process and solved by developing a novel extension to the sample-efficient, off-policy soft actor-critic (SAC) algorithm. We introduce the use of attention networks for variable-length observation processing and a distributed computing architecture to achieve high training sample throughput as compared to existing approaches. A comprehensive numerical study shows that the proposed framework can ensure safe and efficient separation of aircraft in high density, dynamic environments with various sources of uncertainty. | Improving Autonomous Separation Assurance through Distributed Reinforcement Learning with Attention Networks |
In recent years, machine learning has achieved impressive results across different application areas. However, machine learning algorithms do not necessarily perform well on a new domain with a different distribution than their training set. Domain Adaptation (DA) is used to mitigate this problem. One approach of existing DA algorithms is to find domain-invariant features whose distributions in the source domain are the same as their distribution in the target domain. In this paper, we propose to let the classifier that performs the final classification task on the target domain implicitly learn the invariant features. This is achieved by feeding the classifier, during training, generated fake samples that are similar to samples from both the source and target domains. We call these generated samples domain-agnostic samples. To accomplish this we propose a novel variation of generative adversarial networks (GAN), called the MiddleGAN, that generates fake samples that are similar to samples from both the source and target domains, using two discriminators and one generator. We extend the theory of GAN to show that there exist optimal solutions for the parameters of the two discriminators and one generator in MiddleGAN, and empirically show that the samples generated by the MiddleGAN are similar to both samples from the source domain and samples from the target domain. We conducted extensive evaluations using 24 benchmarks; on the 24 benchmarks, we compare MiddleGAN against various state-of-the-art algorithms and outperform the state-of-the-art by up to 20.1\% on certain benchmarks. | MiddleGAN: Generate Domain Agnostic Samples for Unsupervised Domain Adaptation
The residual finite-dimensionality of a $\mathrm{C}^*$-algebra is known to be encoded in a topological property of its space of representations, stating that finite-dimensional representations should be dense therein. We extend this paradigm to general (possibly non-self-adjoint) operator algebras. While numerous subtleties emerge in this greater generality, we exhibit novel tools for constructing finite-dimensional approximations. One such tool is a notion of a residually finite-dimensional coaction of a semigroup on an operator algebra, which allows us to construct finite-dimensional approximations for operator algebras of functions and operator algebras of semigroups. Our investigation is intimately related to the question of whether residual finite-dimensionality of an operator algebra is inherited by its maximal $\mathrm{C}^*$-cover, which we resolve in many cases of interest. | Finite-dimensional approximations and semigroup coactions for operator algebras |
With the INTEGRAL observatory, ESA has provided a unique tool to the astronomical community revealing hundreds of sources, new classes of objects, extraordinary views of antimatter annihilation in our Galaxy, and fingerprints of recent nucleosynthesis processes. While INTEGRAL provides the global overview over the soft gamma-ray sky, there is a growing need to perform deeper, more focused investigations of gamma-ray sources. In soft X-rays a comparable step was taken going from the Einstein and the EXOSAT satellites to the Chandra and XMM/Newton observatories. Technological advances in the past years in the domain of gamma-ray focusing using Laue diffraction have paved the way towards a new gamma-ray mission, providing major improvements regarding sensitivity and angular resolution. Such a future Gamma-Ray Imager will allow studies of particle acceleration processes and explosion physics in unprecedented detail, providing essential clues on the innermost nature of the most violent and most energetic processes in the Universe. | GRI: The Gamma-Ray Imager mission |
ADiT is an adaptive approach for processing distributed top-$k$ queries over peer-to-peer networks, optimizing both system load and query response time. This approach considers the size of the peer-to-peer network, the number $k$ of searched objects, the network capabilities of a connected peer, i.e. the transmission rate, the number of objects stored on each peer, and the speed of a peer in processing a local top-$k$ query. In extensive experiments with a variety of scenarios we could show that ADiT outperforms state-of-the-art distributed query processing techniques. | Adaptive Distributed Top-k Query Processing
This document outlines major directions in theoretical support for the measurement of nucleon resonance transition form factors at the JLab 12 GeV upgrade with the CLAS12 detector. Using single and double meson production, prominent resonances in the mass range up to 2 GeV will be studied in the range of photon virtuality $Q^2$ up to 12 GeV$^2$ where quark degrees of freedom are expected to dominate. High level theoretical analysis of these data will open up opportunities to understand how the interactions of dressed quarks create the ground and excited nucleon states and how these interactions emerge from QCD. The paper reviews the current status and the prospects of QCD based model approaches that relate phenomenological information on transition form factors to the non-perturbative strong interaction mechanisms, that are responsible for resonance formation. | Theory Support for the Excited Baryon Program at the Jlab 12 GeV Upgrade |
Relativistic X-ray emission lines from the inner accretion disk around black holes are reviewed. Recent observations with the Chandra X-ray Observatory, X-ray Multi-Mirror Mission-Newton, and Suzaku are revealing these lines to be good probes of strong gravitational effects. A number of important observational and theoretical developments are highlighted, including evidence of black hole spin and effects such as gravitational light bending, the detection of relativistic lines in stellar-mass black holes, and evidence of orbital-timescale line flux variability. In addition, the robustness of the relativistic disk lines against absorption, scattering, and continuum effects is discussed. Finally, prospects for improved measures of black hole spin and understanding the spin history of supermassive black holes in the context of black hole-galaxy co-evolution are presented. The best data and most rigorous results strongly suggest that relativistic X-ray disk lines can drive future explorations of General Relativity and disk physics. | Relativistic X-ray Lines from the Inner Accretion Disks Around Black Holes |
The use of robots in minimally invasive surgery has improved the quality of standard surgical procedures. So far, only the automation of simple surgical actions has been investigated by researchers, while the execution of structured tasks requiring reasoning on the environment and the choice among multiple actions is still managed by human surgeons. In this paper, we propose a framework to implement surgical task automation. The framework consists of a task-level reasoning module based on answer set programming, a low-level motion planning module based on dynamic movement primitives, and a situation awareness module. The logic-based reasoning module generates explainable plans and is able to recover from failure conditions, which are identified and explained by the situation awareness module interfacing to a human supervisor, for enhanced safety. Dynamic Movement Primitives allow to replicate the dexterity of surgeons and to adapt to obstacles and changes in the environment. The framework is validated on different versions of the standard surgical training peg-and-ring task. | Autonomous task planning and situation awareness in robotic surgery |
We experimentally study the electronic spin transport in hBN encapsulated single layer graphene nonlocal spin valves. The use of top and bottom gates allows us to control the carrier density and the electric field independently. The spin relaxation times in our devices range up to 2 ns with spin relaxation lengths exceeding 12 $\mu$m even at room temperature. We find that the ratio of the spin relaxation time for spins pointing out-of-plane to spins in-plane is $\tau_{\bot} / \tau_{||} \approx$ 0.75 for zero applied perpendicular electric field. By tuning the electric field this anisotropy changes to $\approx$0.65 at 0.7 V/nm, in agreement with an electric field tunable in-plane Rashba spin-orbit coupling. | Controlling spin relaxation in hexagonal BN-encapsulated graphene with a transverse electric field
Increasing penetration levels of photovoltaic (PV) distributed generation (DG) in distribution networks will have many impacts on nominal circuit operating conditions, including voltage quality and reverse power flow issues. In the U.S., most studies on PVDG impacts on distribution networks have been performed for West Coast and central states. The objective of this paper is to study the impacts of PVDG integration on a local distribution network based on real-world settings for network parameters and time-series analysis. The PVDG penetration level is considered to find the hosting capacity of the network without major issues in terms of voltage quality and reverse power flow. Time-series analyses show that distributed installation of PVDGs on commercial buses yields the maximum network energy loss reduction and allows larger penetration ratios for them. Additionally, the penetration ratio thresholds for which there will be no power quality and reverse power flow issues, and the optimal allocation of PVDG and penetration levels, are identified for different installation scenarios. | Time-Series Analysis of Photovoltaic Distributed Generation Impacts on a Local Distributed Network
The order of the chiral phase transition of lattice QCD with unimproved staggered fermions is known to depend on the number of quark flavours, their masses and the lattice spacing. Previous studies in the literature for $N_f \in \{ 3,4 \}$ show first-order transitions, which weaken with decreasing lattice spacing. Here we investigate what happens when lattices are made coarser to establish contact to the strong coupling region. For $N_f \in \{4,8 \}$ we find a drastic weakening of the transition when going from $N_{\tau}=4$ to $N_{\tau}=2$, which is consistent with a second-order chiral transition reported in the literature for $N_f=4$ in the strong coupling limit. This implies a non-monotonic behaviour of the critical quark or pseudo-scalar meson mass, which separates first-order transitions from crossover behaviour, as a function of lattice spacing. | The chiral phase transition from strong to weak coupling |
This report aims at giving a general overview on the classification of the maximal subgroups of compact Lie groups (not necessarily connected). In the first part, it is shown that these fall naturally into three types: (1) those of trivial type, which are simply defined as inverse images of maximal subgroups of the corresponding component group under the canonical projection and whose classification constitutes a problem in finite group theory, (2) those of normal type, whose connected one-component is a normal subgroup, and (3) those of normalizer type, which are the normalizers of their own connected one-component. It is also shown how to reduce the classification of maximal subgroups of the last two types to: (2) the classification of the finite maximal $\Sigma$-invariant subgroups of center-free connected compact simple Lie groups and (3) the classification of the $\Sigma$-primitive subalgebras of compact simple Lie algebras, where $\Sigma$ is a subgroup of the corresponding outer automorphism group. In the second part, we explicitly compute the normalizers of the primitive subalgebras of the compact classical Lie algebras (in the corresponding classical groups), thus arriving at the complete classification of all (non-discrete) maximal subgroups of the compact classical Lie groups. | Maximal Subgroups of Compact Lie Groups |
This paper addresses the four enabling technologies, namely multi-user sparse code multiple access (SCMA), content caching, energy harvesting, and physical layer security, for proposing an energy- and spectral-efficient resource allocation algorithm for the access and backhaul links in heterogeneous cellular networks. Although each of the above-mentioned issues could be a topic of research on its own, in a real situation we would face a complicated scenario where they should be considered jointly, and hence our target is to consider these technologies jointly in a unified framework. Moreover, we propose two novel content delivery scenarios: 1) single frame content delivery (SFCD), and 2) multiple frames content delivery (MFCD), where the time duration of serving user requests is divided into several frames. In the first scenario, the requested content of each user is served over one frame. However, in the second scenario, the requested content of each user can be delivered over several frames. We formulate the resource allocation for the proposed scenarios as optimization problems where our main aim is to maximize the energy efficiency of the access links subject to the transmit power and rate constraints of the access and backhaul links, caching and energy harvesting constraints, and SCMA codebook allocation limitations. Due to practical limitations, we assume that the channel state information values between eavesdroppers and base stations are uncertain and design the network for the worst-case scenario. Since the corresponding optimization problems are mixed-integer nonlinear nonconvex programs, NP-hard and intractable, we propose an iterative algorithm based on the well-known alternate and successive convex approximation methods. | Single or Multiple Frames Content Delivery for Next-Generation Networks?
A supersymmetric generalization of the Peccei-Quinn mechanism is proposed in which two U(1) CP violating phases of the supersymmetric standard model are promoted to dynamical variables. This amounts to postulating the existence of spontaneously broken global symmetries in the supersymmetry breaking sector. The vacuum can then relax near a CP conserving point. As a consequence the strong CP and supersymmetric CP problems may be solved by similar mechanisms. | Dynamical Relaxation of the Supersymmetric CP Violating Phases |
We report a photoluminescence study of high-quality Ge samples at temperatures 12 K $\leq$ T $\leq$ 295 K, over a spectral range that covers phonon-assisted emission from the indirect gap (between the lowest conduction band at the L point of the Brillouin zone and the top of the valence band at the $\Gamma$ point), as well as direct gap emission (from the local minimum of the conduction band at the $\Gamma$ point). The spectra display a rich structure with a rapidly changing lineshape as a function of T. A theory is developed to account for the experimental results using analytical expressions for the contributions from LA, TO, LO, and TA phonons. Coupling of states exactly at the $\Gamma$ and L points is forbidden by symmetry for the latter two phonon modes, but becomes allowed for nearby states and can be accounted for using wave-vector dependent deformation potentials. Excellent agreement is obtained between predicted and observed photoluminescence lineshapes. A decomposition of the predicted signal in terms of the different phonon contributions implies that near room temperature indirect optical absorption and emission are dominated by forbidden processes, and the deformation potentials for allowed processes are smaller than previously assumed. | Temperature-dependent photoluminescence in Ge: experiment and theory |
The nucleon-nucleon (NN) t-matrix is calculated directly as a function of two vector momenta for different realistic NN potentials. To facilitate this, a formalism is developed for solving the two-nucleon Lippmann-Schwinger equation in momentum space without employing a partial wave decomposition. The total spin is treated in a helicity representation. Two different realistic NN interactions, one defined in momentum space and one in coordinate space, are presented in a form suited for this formulation. The angular and momentum dependence of the full amplitude is studied and displayed. A partial wave decomposition of the full amplitude is carried out to compare the presented results with the well-known phase shifts provided by those interactions. | Nucleon-Nucleon Scattering in a Three Dimensional Approach
Let $G$ be a multigraph and $L\,:\,E(G) \to 2^\mathbb{N}$ be a list assignment on the edges of $G$. Suppose additionally, for every vertex $x$, the edges incident to $x$ have at least $f(x)$ colors in common. We consider a variant of local edge-colorings wherein the color received by an edge $e$ must be contained in $L(e)$. The locality appears in the function $f$, i.e., $f(x)$ is some function of the local structure of $x$ in $G$. Such a notion is a natural generalization of traditional local edge-coloring. Our main results include sufficient conditions on the function $f$ to construct such colorings. As corollaries, we obtain local analogs of Vizing and Shannon's theorems, recovering a recent result of Conley, Greb\'ik and Pikhurko. | Multigraph edge-coloring with local list sizes |
The fine-tuning principles are analyzed in search of predictions for the top-quark and Higgs-boson masses. A modification of the Veltman condition based on the compensation between fermion and boson vacuum energies within the Standard Model multiplets is proposed. It is supplemented with stability under rescaling and with the requirement of a minimum for the physical v.e.v. of the Higgs field (zero anomalous dimension). Their joint solution for the top-quark and Higgs-boson couplings exists for the cutoff $\Lambda \approx 2.3 \cdot 10^{13}\,GeV$, which yields the low-energy values $m_t = 151 \pm 4\,GeV; m_H = 195 \pm 7\,GeV$. | Vacuum fine tuning and empirical estimations of masses of the top-quark and Higgs boson
This note discusses Watson and Holmes (2016) and their proposals towards more robust Bayesian decisions. While we acknowledge and commend the authors for setting new and all-encompassing principles of Bayesian robustness, and we appreciate the strong anchoring of those within a decision-theoretic referential, we remain uncertain as to which extent such principles can be applied outside binary decisions. We also wonder at the ultimate relevance of Kullback-Leibler neighbourhoods to characterise robustness and favour extensions along non-parametric axes. | Some comments about James Watson's and Chris Holmes' "Approximate Models and Robust Decisions": Nonparametric Bayesian clay for robust decision bricks
Product Data Management (PDM) desktop and web based systems maintain organizational technical and managerial data to increase the quality of products by improving the processes of development, business process flows, change management, product structure management, project tracking and resource planning. Though PDM heavily benefits industry, the PDM community faces a serious unresolved issue in PDM system development: flexible and user-friendly graphical user interfaces for efficient human-machine communication. PDM systems offer different services and functionalities at a time, but the graphical user interfaces of most PDM systems are not designed in a way that a user (especially a new user) can easily learn and use them. Targeting this issue, thorough research was conducted in the field of Human-Computer Interaction; the resulting data provide information about graphical user interface development using rich internet applications. The accomplished goal of this research was to support the field of PDM with the proposition of a conceptual model for the implementation of a flexible web-based graphical user interface. The proposed conceptual model was successfully turned into an implementation model, and a resulting prototype adding value to the field is now available. Describing the proposition in detail, the main concept, implementation designs and the developed prototype are also discussed in this paper. Moreover, at the end, the prototype is compared with the respective functions of existing PDM systems, i.e., Windchill and CIM, to evaluate its effectiveness against the targeted challenge. | Designing Flexible GUI to Increase the Acceptance Rate of Product Data Management Systems in Industry
Background: In the marine environment, where there are few absolute physical barriers, contemporary contact between previously isolated species can occur across great distances, and in some cases, may be inter-oceanic. [..] in the minke whale species complex [...] migrations [..] have been documented and fertile hybrids and back-crossed individuals between both species have also been identified. However, it is not known whether this represents a contemporary event, potentially driven by ecosystem changes in the Antarctic, or a sporadic occurrence happening over an evolutionary time-scale. We successfully used whole genome resequencing to identify a panel of diagnostic SNPs which now enable us address this evolutionary question. Results: A large number of SNPs displaying fixed or nearly fixed allele frequency differences among the minke whale species were identified from the sequence data. Five panels of putatively diagnostic markers were established on a genotyping platform for validation of allele frequencies; two panels (26 and 24 SNPs) separating the two species of minke whale, and three panels (22, 23, and 24 SNPs) differentiating the three subspecies of common minke whale. The panels were validated against a set of reference samples, demonstrating the ability to accurately identify back-crossed whales up to three generations. Conclusions: This work has resulted in the development of a panel of novel diagnostic genetic markers to address inter-oceanic and global contact among the genetically isolated minke whale species and sub-species. These markers, including a globally relevant genetic reference data set for this species complex, are now openly available for researchers [..]. The approach used here, combining whole genome resequencing and high-throughput genotyping, represents a universal approach to develop similar tools for other species and population complexes. | Whole genome resequencing reveals diagnostic markers for investigating global migration and hybridization between minke whale species |
Let K be a knot in an integral homology 3-sphere and let B denote the 2-fold branched cover of the integral homology sphere branched along K. We construct a map from the slice of characters that are trace-free along meridians in the SL(2, C)-character variety of the knot exterior to the SL(2, C)-character variety of the 2-fold branched cover B. When this map is surjective, it describes the slice as the 2-fold branched cover over the SL(2, C)-character variety of B with branch locus given by the abelian characters, whose preimage is precisely the set of metabelian characters. We show that each metabelian character can be represented as the character of a binary dihedral representation of the knot group. This map is shown to be surjective for all 2-bridge knots and all pretzel knots of type (p, q, r). An extension of this framework to n-fold branched covers is also described. | On the geometry of the slice of trace--free SL(2,C)-characters of a knot group
This paper compares well-established Convolutional Neural Networks (CNNs) to recently introduced Vision Transformers for the task of Diabetic Foot Ulcer Classification, in the context of the DFUC 2021 Grand-Challenge, in which this work attained the first position. Comprehensive experiments demonstrate that modern CNNs are still capable of outperforming Transformers in a low-data regime, likely owing to their ability for better exploiting spatial correlations. In addition, we empirically demonstrate that the recent Sharpness-Aware Minimization (SAM) optimization algorithm considerably improves the generalization capability of both kinds of models. Our results demonstrate that for this task, the combination of CNNs and the SAM optimization process results in superior performance than any other of the considered approaches. | Convolutional Nets Versus Vision Transformers for Diabetic Foot Ulcer Classification |
The collective interference of partially distinguishable bosons in multi-mode networks is studied via double-sided Feynman diagrams. The probability for many-body scattering events becomes a multi-dimensional tensor-permanent, which interpolates between distinguishable particles and identical bosons, and easily extends to mixed initial states. The permanent of the distinguishability matrix, composed of all mutual scalar products of the single-particle mode-functions, emerges as a natural measure for the degree of interference: It yields a bound on the difference between event probabilities for partially distinguishable bosons and the idealized species, and exactly quantifies the degree of bosonic bunching. | Sampling of partially distinguishable bosons and the relation to the multidimensional permanent |
Most popular strategies to capture subjective judgments from humans involve the construction of a unidimensional relative measurement scale, representing order preferences or judgments about a set of objects or conditions. This information is generally captured by means of direct scoring, either in the form of a Likert or cardinal scale, or by comparative judgments in pairs or sets. In this sense, the use of pairwise comparisons is becoming increasingly popular because of the simplicity of this experimental procedure. However, this strategy requires non-trivial data analysis to aggregate the comparison ranks into a quality scale and analyse the results, in order to take full advantage of the collected data. This paper explains the process of translating pairwise comparison data into a measurement scale, discusses the benefits and limitations of such scaling methods, and introduces publicly available software in Matlab. We improve on existing scaling methods by introducing outlier analysis, providing methods for computing confidence intervals and statistical testing, and introducing a prior, which reduces estimation error when the number of observers is low. Most of our examples focus on image quality assessment. | A practical guide and software for analysing pairwise comparison experiments
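The final row above, on scaling pairwise comparison experiments, lends itself to a small worked example. The sketch below is not the paper's Matlab software; it is a minimal Python illustration of one standard way to turn a pairwise-comparison count matrix into a unidimensional quality scale, assuming a Bradley-Terry observer model fitted with Hunter's MM algorithm (the paper's own method may differ, e.g., by using a Thurstonian model, outlier analysis, or a prior). The function name and toy data are invented for illustration.

```python
import numpy as np

def bradley_terry_scale(wins, n_iter=500, tol=1e-9):
    """Fit Bradley-Terry scores to a pairwise-comparison count matrix.

    wins[i, j] = how many times condition i was preferred over condition j.
    Returns zero-mean log-scores (higher means better quality).
    Assumes every condition wins at least one comparison.
    """
    n = wins.shape[0]
    total_wins = wins.sum(axis=1)        # w_i: total wins of condition i
    n_comp = wins + wins.T               # m_ij: comparisons between i and j
    s = np.ones(n)                       # initial (flat) scores
    for _ in range(n_iter):
        # MM update (Hunter 2004): s_i <- w_i / sum_j [ m_ij / (s_i + s_j) ]
        denom = n_comp / (s[:, None] + s[None, :])
        np.fill_diagonal(denom, 0.0)
        s_new = total_wins / denom.sum(axis=1)
        s_new /= s_new.sum()             # scores identifiable only up to scale
        if np.max(np.abs(s_new - s)) < tol:
            s = s_new
            break
        s = s_new
    log_s = np.log(s)
    return log_s - log_s.mean()

# Toy data: A beats B 8/10 times, B beats C 7/10, A beats C 9/10.
wins = np.array([[0., 8., 9.],
                 [2., 0., 7.],
                 [1., 3., 0.]])
print(bradley_terry_scale(wins))         # recovered scale: A > B > C
```

Under this assumed model, the estimated log-scores play the role of the quality scale the row describes; confidence intervals of the kind the paper discusses could then be approximated, for instance, by bootstrapping the comparison matrix and re-fitting.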