| text | summary |
|---|---|
It is important to clarify where and when rocky and icy planetesimals form in a viscously evolving disk. We wish to understand how a local runaway pile-up of solids occurs inside or outside the snow line. We assume an icy pebble contains micron-sized silicate grains that are uniformly mixed with the ice and are released during ice sublimation. Using a local one-dimensional code, we solve the radial drift and turbulent diffusion of solids and water vapor, taking account of their sublimation/condensation around the snow line. We systematically investigate the effects of the back-reactions of the solids on the gas on the radial drift and diffusion of solids, the scale-height evolution of the released silicate particles, and a possible difference between the effective viscous parameter for turbulent diffusion ($\alpha_{\rm tur}$) and that for the gas accretion rate onto the central star ($\alpha_{\rm acc}$). We study the dependence on the ratio of the solid mass flux to the gas mass flux ($F_{\rm p/g}$). We show that the favorable locations for the pile-up of silicate grains and icy pebbles are the regions in the proximity of the water snow line, inside and outside it, respectively. We find that runaway pile-ups occur when the back-reactions for both radial drift and diffusion are included. In the case with only the back-reaction for radial drift, no runaway pile-up is found except for extremely high pebble fluxes, although the condition for the streaming instability can be satisfied for relatively large $F_{\rm p/g}$, as found in the past literature. If the back-reaction for radial diffusion is also considered, the runaway pile-up occurs for reasonable values of the pebble flux. The runaway pile-up of silicate grains, which would lead to the formation of rocky planetesimals, occurs for $\alpha_{\rm tur} \ll \alpha_{\rm acc}$, while the runaway pile-up of icy pebbles is favored for $\alpha_{\rm tur} \sim \alpha_{\rm acc}$.
| Formation of rocky and icy planetesimals inside and outside the snow line: Effects of diffusion, sublimation and back-reaction |
We consider representations of natural numbers by arithmetical expressions using ones, addition, multiplication and parentheses. The (integer) complexity of n -- denoted by ||n|| -- is defined as the number of ones in the shortest expression representing n. We arrive here very soon at problems that are easy to formulate, but (it seems) extremely hard to solve. In this paper we present our attempts to explore the field by means of experimental mathematics. Having computed the values of ||n|| up to 10^12, we present our observations. One of them (if true) implies that there is an infinite number of Sophie Germain primes, and even that there is an infinite number of Cunningham chains of length 4 (at least). We also prove some analytical results about integer complexity. | Integer Complexity: Experimental and Analytical Results |
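As an illustration of the definition, ||n|| for small n can be computed with a simple dynamic program over the two ways of building n, as a sum or as a product of smaller values. This sketch is ours, not the authors' code, and is far too slow for the 10^12 range explored in the paper:

```python
def complexities(limit):
    """f[n] = ||n||: minimal number of 1s writing n with +, * and parentheses."""
    f = [0] * (limit + 1)
    for n in range(1, limit + 1):
        best = n  # worst case: n = 1 + 1 + ... + 1 uses n ones
        for a in range(1, n // 2 + 1):      # n = a + (n - a)
            best = min(best, f[a] + f[n - a])
        d = 2
        while d * d <= n:                   # n = d * (n // d)
            if n % d == 0:
                best = min(best, f[d] + f[n // d])
            d += 1
        f[n] = best
    return f

f = complexities(12)
# e.g. ||6|| = 5 via 6 = (1+1)*(1+1+1), and ||12|| = 7 via 12 = (1+1)*(1+1)*(1+1+1)
```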
Physics-informed neural networks allow models to be trained by physical laws described by general nonlinear partial differential equations. However, traditional architectures struggle to solve more challenging time-dependent problems due to their architectural limitations. In this work, we present a novel physics-informed framework for solving time-dependent partial differential equations. Using only the governing differential equations and the problem's initial and boundary conditions, we generate a latent representation of the problem's spatio-temporal dynamics. Our model uses discrete cosine transforms to encode spatial frequencies and recurrent neural networks to process the time evolution. This efficiently and flexibly produces a compressed representation which is used for additional conditioning of physics-informed models. We show experimental results on the Taylor-Green vortex solution to the Navier-Stokes equations. Our proposed model achieves state-of-the-art performance on the Taylor-Green vortex relative to other physics-informed baseline models. | Physics Informed RNN-DCT Networks for Time-Dependent Partial Differential Equations |
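The core idea of encoding spatial frequencies with a discrete cosine transform can be sketched with a hand-rolled orthonormal DCT-II (this is purely our illustration, not the paper's model; the field and the `keep` cutoff are arbitrary choices): a smooth field is captured almost entirely by its low-frequency coefficients, which is what makes the latent representation compact.

```python
import numpy as np

def dct_matrix(N):
    # Orthonormal DCT-II matrix: rows are sampled cosine basis functions.
    n = np.arange(N)
    M = np.cos(np.pi * n[:, None] * (2 * n + 1) / (2 * N)) * np.sqrt(2.0 / N)
    M[0] /= np.sqrt(2.0)
    return M

def dct_compress(field, keep):
    # 2-D DCT, zero all but the keep x keep lowest frequencies, invert.
    D = dct_matrix(field.shape[0])
    coeffs = D @ field @ D.T
    coeffs[keep:, :] = 0.0
    coeffs[:, keep:] = 0.0
    return D.T @ coeffs @ D  # D is orthogonal, so D.T inverts it

# A field built from two low DCT frequencies is recovered exactly from 8x8
# coefficients instead of 64x64 samples:
N = 64
g = (2 * np.arange(N) + 1) / (2 * N)
u = np.cos(np.pi * 3 * g)[:, None] * np.cos(np.pi * 5 * g)[None, :]
err = np.abs(u - dct_compress(u, keep=8)).max()  # round-off level
```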
In this paper, we develop a novel distributed algorithm for addressing convex optimization with both nonlinear inequality and linear equality constraints, where the objective function can be a general nonsmooth convex function and all the constraints can be fully coupled. Specifically, we first separate the constraints into three groups, and design two primal-dual methods and utilize a virtual-queue-based method to handle each group of the constraints independently. Then, we integrate these three methods in a strategic way, leading to an integrated primal-dual proximal (IPLUX) algorithm, and enable the distributed implementation of IPLUX. We show that IPLUX achieves an $O(1/k)$ rate of convergence in terms of optimality and feasibility, which is stronger than the convergence results of the state-of-the-art distributed algorithms for convex optimization with coupling nonlinear constraints. Finally, IPLUX exhibits competitive practical performance in the simulations. | Distributed Optimization with Coupling Constraints |
Motivated by recent experiments [Y.J. Lin {\it et al.}, Nature {\bf 471}, 83 (2011)], we study Mott phases and superfluid-insulator (SI) transitions of two-species ultracold bosonic atoms in a two-dimensional square optical lattice with nearest neighbor hopping amplitude $t$ in the presence of a spin-orbit coupling characterized by a tunable strength $\gamma$. Using both strong-coupling expansion and Gutzwiller mean-field theory, we chart out the phase diagrams of the bosons in the presence of such spin-orbit interaction. We compute the momentum distribution of the bosons in the Mott phase near the SI transition point and show that it displays precursor peaks whose position in the Brillouin zone can be varied by tuning $\gamma$. Our analysis of the critical theory of the transition unravels the presence of unconventional quantum critical points at $t/\gamma=0$ which are accompanied by emergence of an additional gapless mode in the critical region. We also study the superfluid phases of the bosons near the SI transition using a Gutzwiller mean-field theory which reveals the existence of a twisted superfluid phase with an anisotropic twist angle which depends on $\gamma$. Finally, we compute the collective modes of the bosons and point out the presence of reentrant SI transitions as a function of $\gamma$ for non-zero $t$. We propose experiments to test our theory. | Superfluid-Insulator transition of two-species bosons with spin-orbit coupling |
We derive a spin-dependent Hamiltonian that captures the symmetry of the zone edge states in silicon. We present analytical expressions of the spin-dependent states and of spin relaxation due to electron-phonon interactions in the multivalley conduction band. We find excellent agreement with experimental results. Similar to the usage of the Kane Hamiltonian in direct band-gap semiconductors, the new Hamiltonian can be used to study spin properties of electrons in silicon. | Spin-Orbit Symmetries of Conduction Electrons in Silicon |
We revisit the localization formulas of cohomology intersection numbers associated to a logarithmic connection. The main contribution of this paper is threefold: we prove the localization formula of the cohomology intersection number of logarithmic forms in terms of the residue of a connection; we prove that the leading term of the Laurent expansion of the cohomology intersection number is the Grothendieck residue when the connection is hypergeometric; and we prove that the leading term of the stringy integral discussed by Arkani-Hamed, He and Lam is nothing but the self-cohomology intersection number of the canonical form. | Localization formulas of cohomology intersection numbers |
A transport study of two-dimensional (2D) holes confined to wide GaAs quantum wells provides a glimpse of a subtle competition between different many-body phases at Landau level filling $\nu=3/2$ in tilted magnetic fields. At large tilt angles ($\theta$), an anisotropic, stripe (or nematic) phase replaces the isotropic compressible Fermi sea at $\nu=3/2$ if the quantum well has a symmetric charge distribution. When the charge distribution is made asymmetric, instead of the stripe phase, an even-denominator fractional quantum Hall state appears at $\nu=3/2$ in a range of large $\theta$, and reverts back to a compressible state at even higher $\theta$. We attribute this remarkable evolution to the significant mixing of the excited and ground-state Landau levels of 2D hole systems in tilted fields. | Morphing of 2D Hole Systems at $\nu=3/2$ in Parallel Magnetic Fields: Compressible, Stripe, and Fractional Quantum Hall Phases |
A "drivebelt" stadium billiard with boundary consisting of circular arcs of differing radius connected by their common tangents shares many properties with the conventional "straight" stadium, including hyperbolicity and mixing, as well as intermittency due to marginally unstable periodic orbits (MUPOs). Interestingly, the roles of the straight and curved sides are reversed. Here we discuss intermittent properties of the chaotic trajectories from the point of view of escape through a hole in the billiard, giving the exact leading order coefficient $\lim_{t\to\infty} t P(t)$ of the survival probability $P(t)$ which is algebraic for fixed hole size. However, in the natural scaling limit of small hole size inversely proportional to time, the decay remains exponential. The big distinction between the straight and drivebelt stadia is that in the drivebelt case there are multiple families of MUPOs leading to qualitatively new effects. A further difference is that most marginal periodic orbits in this system are oblique to the boundary, thus permitting applications that utilise total internal reflection such as microlasers. | Quantifying intermittency in the open drivebelt billiard |
A weakly interacting Bose gas on a simple cubic lattice is considered. We prove the existence of standard, or zero-mode, Bose condensation at sufficiently low temperature. This result is valid for sufficiently small interaction potentials and small values of the chemical potential. Our method exploits the infrared bound for a suitable two-point Bogolyubov inner product. We do not use reflection positivity or any expansion methods. | Proof of Bose condensation for weakly interacting lattice bosons |
The chromatic polynomial of a graph G counts the number of proper colorings of G. We give an affirmative answer to the conjecture of Read and Rota-Heron-Welsh that the absolute values of the coefficients of the chromatic polynomial form a log-concave sequence. We define a sequence of numerical invariants of projective hypersurfaces analogous to the Milnor number of local analytic hypersurfaces. Then we give a characterization of correspondences between projective spaces up to a positive integer multiple, which includes the conjecture on the chromatic polynomial as a special case. As a byproduct of our approach, we obtain an analogue of Kouchnirenko's theorem relating the Milnor number with the Newton polytope. | Milnor numbers of projective hypersurfaces and the chromatic polynomial of graphs |
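The log-concavity statement is easy to check numerically on small graphs. The brute-force sketch below (our illustration, nothing like the paper's machinery) evaluates the chromatic polynomial of the 4-cycle by counting proper colorings at enough points to interpolate, then verifies that the absolute coefficients 1, 4, 6, 3, 0 form a log-concave sequence:

```python
import numpy as np
from itertools import product

def chromatic_coeffs(n, edges):
    # Evaluate P(k) = number of proper k-colorings at k = 0..n, interpolate.
    def count(k):
        return sum(
            all(c[u] != c[v] for u, v in edges)
            for c in product(range(k), repeat=n)
        )
    ks = np.arange(n + 1)
    vals = [count(k) for k in ks]
    # Degree-n polynomial through n+1 exact points; round to integers.
    return np.rint(np.polyfit(ks, vals, n)).astype(int)  # highest degree first

def is_log_concave(a):
    return all(a[i] ** 2 >= a[i - 1] * a[i + 1] for i in range(1, len(a) - 1))

c4 = chromatic_coeffs(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
# P(k) = k^4 - 4k^3 + 6k^2 - 3k for the 4-cycle
```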
Few-shot classifiers excel under limited training samples, making them useful in applications with sparse user-provided labels. Their unique relative prediction setup offers opportunities for novel attacks, such as targeting the support sets required to categorise unseen test samples, which are not available in other machine learning setups. In this work, we propose a detection strategy to identify adversarial support sets, which aim to destroy a few-shot classifier's understanding of a certain class. We achieve this by introducing the concept of the self-similarity of a support set and by filtering the supports. Our method is attack-agnostic, and to the best of our knowledge we are the first to explore adversarial detection for the support sets of few-shot classifiers. Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack-detection performance despite the method's conceptual simplicity, with high AUROC scores. We show that self-similarity and filtering for adversarial detection can be paired with other filtering functions, constituting a generalisable concept. | Detection of Adversarial Supports in Few-shot Classifiers Using Self-Similarity and Filtering |
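One way to make the self-similarity idea concrete (a minimal sketch of ours, not the paper's exact statistic) is the mean pairwise cosine similarity of a class's support embeddings: a poisoned support whose embedding points away from the rest of its class drags this score down, which is what a filtering step can exploit.

```python
import numpy as np

def self_similarity(support_embeddings):
    # Mean off-diagonal cosine similarity within one class's support set.
    e = support_embeddings / np.linalg.norm(support_embeddings, axis=1, keepdims=True)
    sim = e @ e.T
    n = len(e)
    return float(sim[~np.eye(n, dtype=bool)].mean())

clean = np.array([[1.0, 0.05], [1.0, -0.05], [1.0, 0.0]])     # tight cluster
poisoned = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.0]])    # one flipped support
# self_similarity(clean) is near 1; self_similarity(poisoned) is negative
```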
We present a detailed solution of the active interface equations in the inviscid limit. The active interface equations were previously introduced as a toy model of membrane-protein systems: they describe a stochastic interface where growth is stimulated by inclusions which themselves move on the interface. In the inviscid limit, the equations reduce to a pair of coupled conservation laws. After discussing how the inviscid limit is obtained, we turn to the corresponding Riemann problem: the solution of the set of conservation laws with discontinuous initial condition. In particular, by considering two physically meaningful initial conditions, a giant trough and a giant peak in the interface, we elucidate the generation of shock waves and rarefaction fans in the system. Then, by combining several Riemann problems, we construct an oscillating solution of the active interface with periodic boundary conditions. The existence of this oscillating state reflects the reciprocal coupling between the two conserved quantities in our system. | Inviscid limit of the active interface equations |
The future of computation is the Graphics Processing Unit, i.e. the GPU. Given the promise that graphics cards have shown in image processing and the accelerated rendering of 3D scenes, and the computational capability these GPUs possess, they are developing into powerful parallel computing units. It is quite simple to program a graphics processor to perform general parallel tasks. Moreover, after understanding the various architectural aspects of the graphics processor, it can be used to perform other taxing tasks as well. In this paper, we show how CUDA can fully utilize the tremendous power of these GPUs. CUDA is NVIDIA's parallel computing architecture; it enables dramatic increases in computing performance by harnessing the power of the GPU. This paper discusses CUDA and its architecture, and compares CUDA C/C++ with other parallel programming frameworks such as OpenCL and DirectCompute. The paper also lists common myths about CUDA and why the future seems promising for CUDA. | GPGPU Processing in CUDA Architecture |
The Hamilton-Waterloo Problem HWP$(v;m,n;\alpha,\beta)$ asks for a 2-factorization of the complete graph $K_v$ or $K_v-I$, the complete graph with the edges of a 1-factor removed, into $\alpha$ $C_m$-factors and $\beta$ $C_n$-factors, where $3 \leq m < n$. In the case that $m$ and $n$ are both even, the problem has been solved except possibly when $1 \in \{\alpha,\beta\}$ or when $\alpha$ and $\beta$ are both odd, in which case necessarily $v \equiv 2 \pmod{4}$. In this paper, we develop a new construction that creates factorizations with larger cycles from existing factorizations under certain conditions. This construction enables us to show that there is a solution to HWP$(v;2m,2n;\alpha,\beta)$ for odd $\alpha$ and $\beta$ whenever the obvious necessary conditions hold, except possibly if $\beta=1$; $\beta=3$ and $\gcd(m,n)=1$; $\alpha=1$; or $v=2mn/\gcd(m,n)$. This result almost completely settles the existence problem for even cycles, other than the possible exceptions noted above. | The Hamilton-Waterloo Problem with even cycle lengths |
We adapt the work of Power to describe general, not-necessarily composable, not-necessarily commutative 2-categorical pasting diagrams and their composable and commutative parts. We provide a deformation theory for pasting diagrams valued in $k$-linear categories, paralleling that provided for diagrams of algebras by Gerstenhaber and Schack, proving the standard results. Along the way, the construction gives rise to a bicategorical analog of the homotopy G-algebras of Gerstenhaber and Voronov. | On Deformations of Pasting Diagrams |
Robust automation of the shot peen forming process demands a closed-loop feedback in which a suitable treatment pattern needs to be found in real-time for each treatment iteration. In this work, we present a method for finding the peen-forming patterns, based on a neural network (NN), which learns the nonlinear function that relates a given target shape (input) to its optimal peening pattern (output), from data generated by finite element simulations. The trained NN yields patterns with an average binary accuracy of 98.8\% with respect to the ground truth in microseconds. | Efficient planning of peen-forming patterns via artificial neural networks |
We present three-dimensional nonlinear magnetohydrodynamic simulations of the interiors of fully convective M-dwarfs. Our models consider 0.3 solar-mass stars using the Anelastic Spherical Harmonic code, with the spherical computational domain extending from 0.08-0.96 times the overall stellar radius. Like previous authors, we find that fully convective stars can generate kG-strength magnetic fields (in rough equipartition with the convective flows) without the aid of a tachocline of shear. Although our model stars are everywhere unstably stratified, the amplitudes and typical pattern sizes of the convective flows vary strongly with radius, with the outer regions of the stars hosting vigorous convection and field amplification while the deep interiors are more quiescent. Modest differential rotation is established in hydrodynamic calculations, but -- unlike in some prior work -- strongly quenched in MHD simulations because of the Maxwell stresses exerted by the dynamo-generated magnetic fields. Despite the lack of strong differential rotation, the magnetic fields realized in the simulations possess significant mean (axisymmetric) components, which we attribute partly to the strong influence of rotation upon the slowly overturning flows. | Simulations of dynamo action in fully convective stars |
The Stokes parameters of the pulsed synchrotron radiation produced in the striped pulsar wind model are computed and compared with optical observations of the Crab pulsar. We assume the main contribution to the wind emissivity comes from a thin transition layer where the dominant toroidal magnetic field reverses its polarity. The radial component of the field is neglected, but a small meridional component is added. The resulting radiation is linearly polarized (Stokes V=0). In the off-pulse region, the electric vector lies in the direction of the projection on the sky of the rotation axis of the pulsar. This property is unique to the wind model and in good agreement with the data. Other properties such as a reduced degree of polarization and a characteristic sweep of the polarization angle within the pulses are also reproduced. These properties are qualitatively unaffected by variations of the wind Lorentz factor, the electron injection power law index and the inclination of the line of sight. | Polarization of high-energy pulsar radiation in the striped wind model |
We present Atacama Large Millimeter/submillimeter Array observations of a massive (M_stars~10^11 M_Sun) compact (r_e,UV~100 pc) merger remnant at z=0.66 that is driving a 1000 km/s outflow of cool gas, with no observational trace of an active galactic nucleus (AGN). We resolve molecular gas on scales of approximately 1-2 kpc, and our main finding is the discovery of a wing of blueshifted CO(2-1) emission out to -1000 km/s relative to the stars. We argue that this is the molecular component of a multiphase outflow, expelled from the central starburst within the past 5 Myr through stellar feedback, although we cannot rule out previous AGN activity as a launching mechanism. If the latter is true, then this is an example of a relic multiphase AGN outflow. We estimate a molecular mass outflow rate of approximately 300 M_Sun/yr, or about one third of the 10 Myr-averaged star formation rate. This system epitomizes the multiphase 'blowout' episode following a dissipational major merger - a process that has violently quenched central star formation and supermassive black hole growth. | Violent quenching: molecular gas blown to 1000 km/s during a major merger |
Biaxial compression of centimetre-scale graphene, freely standing on the surface of water, is studied. Within this platform, we report full stress-strain compression of graphene, identifying elastic and plastic deformations. The Young's modulus follows a scaling law and falls two orders of magnitude below the values commonly reported for microscale graphene samples. Such results strongly confirm that graphene - in its very natural form - lacks any intrinsic elastic parameters. Different functionalizations/manipulations of the graphene lattice affect the mechanics of graphene differently; in particular, the effect of sp3 hybridization and crystalline voids on the yield strength of graphene is explored. Crumpling of graphene is accompanied by the gradual generation and transformation of wrinkles, which brings about viscoelasticity in graphene, observed here for the first time. Additionally, we report a peculiar correlation between the morphology and the distribution of strain in the graphene lattice. | Biaxial compression of centimeter scale graphene on strictly 2D substrate |
We prove Strichartz estimates in similarity coordinates for the radial wave equation with a self-similar potential in dimensions $d\geq 3$. As an application of these, we establish the asymptotic stability of the ODE blowup profile of the energy-critical radial nonlinear wave equation for $3\leq d\leq 6$. | Strichartz estimates and Blowup stability for energy critical nonlinear wave equations |
The operator nabla, introduced by Garsia and the author, plays a crucial role in many aspects of the study of diagonal harmonics. Besides giving several new formulas involving this operator, we show how one is led to representation-theoretic explanations for conjectures about the effect of this operator on Schur functions. | New Formulas and Conjectures for the Nabla Operator |
We introduce pymovements: a Python package for analyzing eye-tracking data that follows best practices in software development, including rigorous testing and adherence to coding standards. The package provides functionality for key processes along the entire preprocessing pipeline. This includes parsing eye tracker data files, transforming positional data into velocity data, detecting gaze events like saccades and fixations, computing event properties like saccade amplitude and fixational dispersion, and visualizing data and results with several types of plotting methods. Moreover, pymovements provides an easily accessible interface for downloading and processing publicly available datasets. Additionally, we emphasize that rigorous testing in scientific software packages is critical to the reproducibility and transparency of research, enabling other researchers to verify and build upon previous findings. | pymovements: A Python Package for Eye Movement Data Processing |
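The positional-to-velocity step in such a pipeline is typically a finite-difference filter. The plain-NumPy sketch below is purely illustrative (it is not pymovements' actual implementation, whose filters and API may differ): a central difference in the interior with one-sided differences at the ends.

```python
import numpy as np

def pos2vel(pos, sampling_rate):
    # Central-difference velocity (units/s) from positional samples.
    pos = np.asarray(pos, dtype=float)
    v = np.empty_like(pos)
    v[1:-1] = (pos[2:] - pos[:-2]) * (sampling_rate / 2.0)
    v[0] = (pos[1] - pos[0]) * sampling_rate       # one-sided at the edges
    v[-1] = (pos[-1] - pos[-2]) * sampling_rate
    return v

# A constant-velocity trace of 0.5 units/sample at 1000 Hz gives 500 units/s:
v = pos2vel(np.arange(10) * 0.5, sampling_rate=1000)
```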
In this chapter, we give an introduction to symbolic artificial intelligence (AI) and discuss its relation and application to multimedia. We begin by defining what symbolic AI is, what distinguishes it from non-symbolic approaches, such as machine learning, and how it can used in the construction of advanced multimedia applications. We then introduce description logic (DL) and use it to discuss symbolic representation and reasoning. DL is the logical underpinning of OWL, the most successful family of ontology languages. After discussing DL, we present OWL and related Semantic Web technologies, such as RDF and SPARQL. We conclude the chapter by discussing a hybrid model for multimedia representation, called Hyperknowledge. Throughout the text, we make references to technologies and extensions specifically designed to solve the kinds of problems that arise in multimedia representation. | An Introduction to Symbolic Artificial Intelligence Applied to Multimedia |
In natural language processing, a recently popular line of work explores how to best report the experimental results of neural networks. One exemplar publication, titled "Show Your Work: Improved Reporting of Experimental Results," advocates for reporting the expected validation effectiveness of the best-tuned model, with respect to the computational budget. In the present work, we critically examine this paper. As far as statistical generalizability is concerned, we find unspoken pitfalls and caveats with this approach. We analytically show that their estimator is biased and uses error-prone assumptions. We find that the estimator favors negative errors and yields poor bootstrapped confidence intervals. We derive an unbiased alternative and bolster our claims with empirical evidence from statistical simulation. Our codebase is at http://github.com/castorini/meanmax. | Showing Your Work Doesn't Always Work |
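The flavor of the bias can be reproduced with a toy plug-in version of an expected-maximum estimator (a simplified stand-in of ours, not the exact estimator analyzed in the paper): estimating E[max of n draws] from the empirical distribution of n observed scores systematically undershoots, because the empirical maximum can never exceed the observed maximum.

```python
import numpy as np

def plug_in_expected_max(scores, n):
    # Expected max of n i.i.d. draws from the *empirical* CDF of `scores`.
    x = np.sort(scores)
    B = len(x)
    i = np.arange(1, B + 1)
    w = (i / B) ** n - ((i - 1) / B) ** n  # P(empirical max = x_(i))
    return float(w @ x)

rng = np.random.default_rng(0)
n = 10
true_emax = n / (n + 1)  # E[max of 10 i.i.d. Uniform(0,1)] = 10/11
est = np.mean([plug_in_expected_max(rng.uniform(size=n), n)
               for _ in range(5000)])
# est comes out noticeably below true_emax: the estimator is biased low
```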
We establish some new bounds on the log-covering numbers of (anisotropic) Gaussian reproducing kernel Hilbert spaces. Unlike previous results in this direction we focus on small explicit constants and their dependency on crucial parameters such as the kernel bandwidth and the size and dimension of the underlying space. | A Closer Look at Covering Number Bounds for Gaussian Kernels |
We present simultaneous multi-color optical photometry using ULTRACAM of the transiting exoplanet KIC 12557548 b (also known as KIC 1255 b). This reveals, for the first time, the color dependence of the transit depth. Our g and z transits are similar in shape to the average Kepler short-cadence profile, and constitute the highest-quality extant coverage of individual transits. Our Night 1 transit depths are 0.85 +/- 0.04% in z; 1.00 +/- 0.03% in g; and 1.1 +/- 0.3% in u. We employ a residual-permutation method to assess the impact of correlated noise on the depth difference between the z and g bands and calculate the significance of the color dependence at 3.2{\sigma}. The Night 1 depths are consistent with dust extinction as observed in the ISM, but require grain sizes comparable to the largest found in the ISM: 0.25-1{\mu}m. This provides direct evidence in favor of this object being a disrupting low-mass rocky planet, feeding a transiting dust cloud. On the remaining four nights of observations the object was in a rare shallow-transit phase. If the grain size in the transiting dust cloud changes as the transit depth changes, the extinction efficiency is expected to change in a wavelength- and composition-dependent way. Observing a change in the wavelength-dependent transit depth would offer an unprecedented opportunity to determine the composition of the disintegrating rocky body KIC 12557548 b. We detected four out-of-transit u band events consistent with stellar flares. | Direct evidence for an evolving dust cloud from the exoplanet KIC 12557548 b |
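As a sanity check on the quoted numbers, a naive propagation of the per-band errors (treating them as independent, which ignores the correlated noise the paper's residual-permutation method accounts for) already puts the z-g depth difference near 3 sigma:

```python
depth_z, err_z = 0.85, 0.04   # Night 1 z-band transit depth and error (%)
depth_g, err_g = 1.00, 0.03   # Night 1 g-band transit depth and error (%)

diff = depth_g - depth_z
sigma_diff = (err_z**2 + err_g**2) ** 0.5   # assumes uncorrelated errors
significance = diff / sigma_diff            # = 0.15 / 0.05 = 3.0
```

The residual-permutation treatment in the paper yields the slightly larger 3.2 sigma.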
In this paper we present a novel unsupervised representation learning approach for 3D shapes, which addresses an important research challenge, as it avoids the manual effort required for collecting supervised data. Our method, VIP-GAN, trains an RNN-based neural network architecture to solve multiple view inter-prediction tasks for each shape. Given several nearby views of a shape, we define view inter-prediction as the task of predicting the center view between the input views and reconstructing the input views in a low-level feature space. The key idea of our approach is to implement the shape representation as a shape-specific global memory that is shared between all local view inter-predictions for each shape. Intuitively, this memory enables the system to aggregate information that is useful for better solving the view inter-prediction tasks for each shape, and to leverage the memory as a view-independent shape representation. Our approach obtains its best results using a combination of L_2 and adversarial losses for the view inter-prediction task. We show that VIP-GAN outperforms state-of-the-art methods in unsupervised 3D feature learning on three large-scale 3D shape benchmarks. | View Inter-Prediction GAN: Unsupervised Representation Learning for 3D Shapes by Learning Global Shape Memories to Support Local View Predictions |
Using the impact-factor representation, we consider lepton pair production by an incident high-energy photon in the strong electromagnetic field of a nucleus. By summing the leading terms of the perturbation series, we obtain a simple formula for the amplitude, valid to all orders in $\alpha Z$ and for an arbitrary field of the nucleus. Using these results, we derive, in a simple manner, the results for lepton pair production by a virtual incident photon in a Coulomb field. For a real incident photon our results coincide with the known ones. A particular example of a non-Coulomb potential is also discussed in some detail. | Lepton pair production by a high energy photon in a strong electromagnetic field |
This paper first establishes an approximate scaling property of the potential-energy function of a classical liquid with good isomorphs (a Roskilde-simple liquid). This "pseudohomogeneous" property makes explicit that - and in which sense - such a system has a hidden scale invariance. The second part gives a potential-energy formulation of the quasiuniversality of monatomic Roskilde-simple liquids, which was recently rationalized in terms of the existence of a quasiuniversal single-parameter family of reduced-coordinate constant-potential-energy hypersurfaces [J. C. Dyre, Phys. Rev. E 87, 022106 (2013)]. The new formulation involves a quasiuniversal reduced-coordinate potential-energy function. A few consequences of this are discussed. | Isomorphs, hidden scale invariance, and quasiuniversality |
We present two verification protocols where the correctness of a "target" computation is checked by means of "trap" computations that can be efficiently simulated on a classical computer. Our protocols rely on a minimal set of noise-free operations (preparation of eight single-qubit states or measurement of four observables, both on a single plane of the Bloch sphere) and achieve linear overhead. To the best of our knowledge, our protocols are the least demanding techniques able to achieve linear overhead. They represent a step towards further reducing the quantum requirements for verification. | Reducing resources for verification of quantum computations |
To date, most analysis of WLANs has been focused on their operation under saturation conditions. This work is an attempt to understand the fundamental performance of WLANs under unsaturated conditions. In particular, we are interested in the delay performance when collisions of packets are resolved by an exponential backoff mechanism. Using a multiple-vacation queueing model, we derive an explicit expression for the packet delay distribution, from which necessary conditions for finite mean delay and delay jitter are established. It is found that under some circumstances, mean delay and delay jitter may approach infinity even when the traffic load is way below the saturation throughput. Saturation throughput is therefore not a sound measure of WLAN capacity when the underlying applications are delay sensitive. To bridge the gap, we define safe-bounded-mean-delay (SBMD) throughput and safe-bounded-delay-jitter (SBDJ) throughput that reflect the actual network capacity users can enjoy when they require bounded mean delay and delay jitter, respectively. The analytical model in this paper is general enough to cover both single-packet reception (SPR) and multi-packet reception (MPR) WLANs, as well as carrier-sensing and non-carrier-sensing networks. We show that the SBMD and SBDJ throughputs scale super-linearly with the MPR capability of a network. Together with our earlier work that proves super-linear throughput scaling under saturation conditions, our results here complete the demonstration of MPR as a powerful capacity-enhancement technique for both delay-sensitive and delay-tolerant applications. | Delay Analysis for Wireless Local Area Networks with Multipacket Reception under Finite Load |
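The possibility of an infinite mean delay at light load comes from the heavy tail of exponential backoff itself. A toy calculation (ours; the paper's multiple-vacation queueing model is far more detailed) shows the mechanism: if each transmission attempt succeeds with probability p and attempt k costs about 2^k slots of backoff, the mean total backoff converges only when 2(1-p) < 1, i.e. p > 1/2.

```python
def mean_backoff_delay(p, max_attempts):
    # E[total backoff slots]: success on attempt k (prob p*(1-p)^k) after
    # waiting 2^0 + 2^1 + ... + 2^k = 2^(k+1) - 1 slots.
    total, survive = 0.0, 1.0
    for k in range(max_attempts):
        total += survive * p * (2 ** (k + 1) - 1)
        survive *= 1.0 - p
    return total

# p = 0.6: the truncated sum has converged long before 50 terms.
converged_gap = mean_backoff_delay(0.6, 100) - mean_backoff_delay(0.6, 50)
# p = 0.4: each extra term still grows geometrically -- the mean diverges.
divergence_ratio = mean_backoff_delay(0.4, 100) / mean_backoff_delay(0.4, 50)
```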
In this paper, we argue that the unsatisfactory out-of-distribution (OOD) detection performance of neural networks is mainly due to the SoftMax loss anisotropy and propensity to produce low entropy probability distributions in disagreement with the principle of maximum entropy. Current OOD detection approaches usually do not directly fix the SoftMax loss drawbacks, but rather build techniques to circumvent it. Unfortunately, those methods usually produce undesired side effects (e.g., classification accuracy drop, additional hyperparameters, slower inferences, and collecting extra data). Instead, we propose replacing the SoftMax loss with a novel loss function that does not suffer from the mentioned weaknesses. The proposed IsoMax loss is isotropic (exclusively distance-based) and provides high entropy posterior probability distributions. Replacing the SoftMax loss with the IsoMax loss requires no model or training changes. Additionally, the models trained with IsoMax loss produce as fast and energy-efficient inferences as those trained using SoftMax loss. Moreover, no classification accuracy drop is observed. The proposed method does not rely on outlier/background data, hyperparameter tuning, temperature calibration, feature extraction, metric learning, adversarial training, ensemble procedures, or generative models. Our experiments showed that IsoMax loss works as a seamless SoftMax loss drop-in replacement that significantly improves neural networks' OOD detection performance. Hence, it may be used as a baseline OOD detection approach to be combined with current or future OOD detection techniques to achieve even higher results. | Entropic Out-of-Distribution Detection: Seamless Detection of Unknown Examples
The coalescence of two compact objects is a key target for the new gravitational wave observatories such as Advanced-Virgo (AdV), Advanced-LIGO (aLIGO) and KAGRA. This phenomenon can lead to the simultaneous detection of electromagnetic waves in the form of short GRBs (sGRBs) and gravitational wave transients. This would potentially allow access, for the first time, to the fireball and central engine properties. We present an estimation of the detection rate of such events, seen by both a Swift-like satellite and AdV/aLIGO. This rate is derived only from the observations of sGRBs. We show that this rate, though not very high, predicts a few triggers during the whole lifetime of Advanced LIGO-Virgo. We discuss how to increase it using some dedicated observational strategies. We apply our results to other missions such as the SVOM French-Chinese satellite project or LOFT. | Simultaneous detection rates of binary neutron star systems in advanced Virgo/LIGO and GRB detectors
Let $T\subset{\mathbb R}^n$ be a fixed set. By a scaled copy of $T$ around $x\in{\mathbb R}^n$ we mean a set of the form $x+rT$ for some $r>0$. In this survey paper we study results about the following type of problems: How small can a set be if it contains a scaled copy of $T$ around every point of a set of given size? We will consider the cases when $T$ is a circle or sphere centered at the origin, a Cantor set in ${\mathbb R}$, the boundary of a square centered at the origin, or more generally the $k$-skeleton ($0\le k<n$) of an $n$-dimensional cube centered at the origin or the $k$-skeleton of a more general polytope of ${\mathbb R}^n$. We also study the case when we allow not only scaled copies but also scaled and rotated copies, and the case when we allow only rotated copies. | Small union with large set of centers
In this paper, we study the distribution of the so-called "Yule's nonsense correlation statistic" on a time interval $[0,T]$ for a time horizon $T>0$, when $T$ is large, for a pair $(X_{1},X_{2})$ of independent Ornstein-Uhlenbeck processes. This statistic is by definition equal to: \begin{equation*} \rho (T):=\frac{Y_{12}(T)}{\sqrt{Y_{11}(T)}\sqrt{Y_{22}(T)}}, \end{equation*} where the random variables $Y_{ij}(T)$, $i,j=1,2$ are defined as \begin{equation*} Y_{ij}(T):=\int_{0}^{T}X_{i}(u)X_{j}(u)du-T\bar{X}_{i}\bar{X_{j}}, \bar{X}_{i}:=\frac{1}{T}\int_{0}^{T}X_{i}(u)du. \end{equation*} We assume $X_{1}$ and $X_{2}$ have the same drift parameter $\theta >0$. We also study the asymptotic law of a discrete-type version of $\rho (T)$, where $Y_{ij}(T)$ above are replaced by their Riemann-sum discretizations. In this case, conditions are provided for how the discretization (in-fill) step relates to the long horizon $T$. We establish identical normal asymptotics for standardized $\rho (T)$ and its discrete-data version. The asymptotic variance of $\rho (T)T^{1/2}$ is $\theta ^{-1}$. We also establish speeds of convergence in the Kolmogorov distance, which are of Berry-Ess\'een-type (constant*$T^{-1/2}$) except for a $\ln T$ factor. Our method is to use the properties of Wiener-chaos variables, since $\rho (T)$ and its discrete version are composed of ratios involving three such variables in the second Wiener chaos. This methodology accesses the Kolmogorov distance thanks to a relation which stems from the connection between the Malliavin calculus and Stein's method on Wiener space. | Asymptotics of Yule's nonsense correlation for Ornstein-Uhlenbeck paths: a Wiener chaos approach
In the framework of the stochastic projected Gross-Pitaevskii equation we investigate finite-temperature dynamics of a bosonic Josephson junction (BJJ) formed by a Bose-Einstein condensate of atoms in a two-well trapping potential. We extract the characteristic properties of the BJJ from the stationary finite-temperature solutions and compare the dynamics of the system with the resistively shunted Josephson model. Analyzing the decay dynamics of the relative population imbalance we estimate the effective normal conductance of the junction induced by thermal atoms. The calculated normal conductance at various temperatures is then compared with predictions of the noise-less model and the model of ballistic transport of thermal atoms. | Finite-temperature dynamics of a bosonic Josephson junction |
Control problems not admitting the dynamic programming principle are known as time-inconsistent. The game-theoretic approach is to interpret such problems as intrapersonal dynamic games and look for subgame perfect Nash equilibria. A fundamental result of time-inconsistent stochastic control is a verification theorem saying that solving the extended HJB system is a sufficient condition for equilibrium. We show that solving the extended HJB system is a necessary condition for equilibrium, under regularity assumptions. The controlled process is a general It\^o diffusion. | A regular equilibrium solves the extended HJB system |
The SARS-CoV-2 outbreak has changed the everyday life of people all over the world. Currently, we face the problem of containing the spread of the virus both through forced lockdowns, which are effective but have the drawback of slowing down the economies of the involved countries, and by identifying and isolating positive individuals, which is in general a hard task due to the lack of information. For this specific disease, identifying the infected is particularly challenging, since there exist categories of individuals, namely the asymptomatic, who are positive and potentially contagious but do not show any of the symptoms of SARS-CoV-2. Until a vaccine is developed and distributed, we need to design ways of selecting the individuals who are most likely infected, given the limited number of tests available each day. In this paper, we make use of data collected by so-called contact-tracing apps to develop an algorithm, PPTO, that identifies the individuals most likely to be positive and that, therefore, should be tested. While such analyses have previously been conducted by centralized algorithms, requiring that all app users' data be gathered in a single database, our protocol is able to work at the device level by exploiting the communication of anonymized information to other devices. | A privacy-preserving tests optimization algorithm for epidemics containment
Temperature affects both the timing and outcome of animal development, but the detailed effects of temperature on the progress of early development have been poorly characterized. To determine the impact of temperature on the order and timing of events during Drosophila melanogaster embryogenesis, we used time-lapse imaging to track the progress of embryos from shortly after egg laying through hatching at seven precisely maintained temperatures between 17.5C and 32.5C. We employed a combination of automated and manual annotation to determine when 36 milestones occurred in each embryo. D. melanogaster embryogenesis takes ~33 hours at 17.5C, and accelerates with increasing temperature to 16 hours at 27.5C, above which embryogenesis slows slightly. Remarkably, while the total time of embryogenesis varies over twofold, the relative timing of events from cellularization through hatching is constant across temperatures. To further explore the relationship between temperature and embryogenesis, we expanded our analysis to cover ten additional Drosophila species of varying climatic origins. Six of these species, like D. melanogaster, are of tropical origin, and embryogenesis time at different temperatures was similar for them all. D. mojavensis, a sub-tropical fly, develops slower than the tropical species at lower temperatures, while D. virilis, a temperate fly, exhibits slower development at all temperatures. The alpine sister species D. persimilis and D. pseudoobscura develop as rapidly as tropical flies at cooler temperatures, but exhibit diminished acceleration above 22.5C and have drastically slowed development by 30C. Despite ranging from 13 hours for D. erecta at 30C to 46 hours for D. virilis at 17.5C, the relative timing of events from cellularization through hatching is constant across all species and temperatures, suggesting the existence of a timer controlling embryogenesis. | Drosophila embryogenesis scales uniformly across temperature in developmentally diverse species
We describe some of the first polarized neutron scattering measurements performed at the HYSPEC spectrometer at the Spallation Neutron Source, Oak Ridge National Laboratory. We discuss details of the instrument setup and the experimental procedures in the mode with full polarization analysis. Examples of polarized neutron diffraction and polarized inelastic neutron data obtained on single crystal samples are presented. | Polarized neutron scattering on HYSPEC: the HYbrid SPECtrometer at SNS
The cold ternary fission of $^{238}$Pu, $^{240}$Pu, $^{242}$Pu and $^{244}$Pu isotopes, with $^{4}$He as light charged particle, in equatorial and collinear configuration has been studied within the Unified ternary fission model (UTFM). The fragment combination $^{100}$Zr+$^{4}$He+$^{134}$Te possessing the near doubly magic nuclei $^{134}$Te (N=82, Z=52) gives the highest yield in the alpha accompanied ternary fission of $^{238}$Pu. For the alpha accompanied ternary fission of $^{240}$Pu, $^{242}$Pu and $^{244}$Pu isotopes, the highest yield was found for the fragment combination with doubly magic nuclei $^{132}$Sn (N=82, Z=50) as the heavier fragment. The deformation and orientation of fragments have also been taken into account for the alpha accompanied ternary fission of $^{238-244}$Pu isotopes, and it has been found that in addition to closed shell effect, ground state deformation also plays an important role in determining the isotopic yield in the ternary fission process. The emission probability and kinetic energy of long range alpha particles have been calculated and are found to be in good agreement with the experimental data. | {\alpha}-accompanied cold ternary fission of $^{238-244}$Pu isotopes in equatorial and collinear configuration |
The Visible Integral Field Replicable Unit Spectrograph (VIRUS) is an array of at least 150 copies of a simple, fiber-fed integral field spectrograph that will be deployed on the Hobby-Eberly Telescope (HET) to carry out the HET Dark Energy Experiment (HETDEX). Each spectrograph contains a volume phase holographic grating as its dispersing element that is used in first order for 350 nm to 550 nm. We discuss the test methods used to evaluate the performance of the prototype gratings, which have aided in modifying the fabrication prescription for achieving the specified batch diffraction efficiency required for HETDEX. In particular, we discuss tests in which we measure the diffraction efficiency at the nominal grating angle of incidence in VIRUS for all orders accessible to our test bench that are allowed by the grating equation. For select gratings, these tests have allowed us to account for > 90% of the incident light for wavelengths within the spectral coverage of VIRUS. The remaining light that is unaccounted for is likely being diffracted into reflective orders or being absorbed or scattered within the grating layer (for bluer wavelengths especially, the latter term may dominate the others). Finally, we discuss an apparatus that will be used to quickly verify the first order diffraction efficiency specification for the batch of at least 150 VIRUS production gratings. | Methods for evaluating the performance of volume phase holographic gratings for the VIRUS spectrograph array |
We study the interaction of small hydrophobic particles on the surface of an ultra-soft elastic gel, in which a small amount of elasticity of the medium balances the weights of the particles. The excess energy of the surface of the deformed gel causes them to attract, as is the case with the generic capillary interactions of particles on a liquid surface. The variation of the gravitational potential energies of the particles resulting from their descents in the gel, coupled with the superposition principle of Nicolson, allows a fair estimation of the distance-dependent attractive energy of the particles. This energy follows a modified Bessel function of the second kind with a characteristic elastocapillary decay length that decreases with the elasticity of the medium. An interesting finding of this study is that the particles on the gel move towards each other as if the system possesses a negative diffusivity that is inversely proportional to friction. This study illustrates how the capillary interaction of particles is modified by the elasticity of the medium, which is expected to have important implications in the surface force driven self-assembly of particles. In particular, this study points out that the range and the strength of the capillary interaction can be tuned by appropriate choices of the elasticity of the support and the interfacial tension of the surrounding medium. Manipulation of the particle interactions is exemplified in such fascinating mimicry of biological processes as tubulation and phagocytic engulfment, and in the assembly of particles that can be used to study nucleation and clustering phenomena in well controlled settings. | Elasto-capillary interaction of particles on the surfaces of ultra-soft gels: a novel route to study self-assembly and soft lubrication
This paper presents the second data release (DR2) of the Beijing-Arizona Sky Survey (BASS). BASS is an imaging survey of about 5400 deg$^2$ in $g$ and $r$ bands using the 2.3 m Bok telescope. DR2 includes the observations as of July 2017 obtained by BASS and the Mayall $z$-band Legacy Survey (MzLS). This is the first time we have included the MzLS data, which covers the same area as BASS. BASS and MzLS have respectively completed about 72% and 76% of their observations. The two surveys will serve the spectroscopic targeting of the upcoming Dark Energy Spectroscopic Instrument. Both BASS and MzLS data are reduced by the same pipeline. We have updated the basic data reduction and photometric methods in DR2. In particular, source detections are performed on stacked images, and photometric measurements are co-added from single-epoch images based on these sources. The median 5$\sigma$ depths with corrections for Galactic extinction are 24.05, 23.61, and 23.10 mag for $g$, $r$, and $z$ bands, respectively. The DR2 data products include stacked images, co-added catalogs, and single-epoch images and catalogs. The BASS website (http://batc.bao.ac.cn/BASS/) provides detailed information and links to download the data. | The Second Data Release of the Beijing-Arizona Sky Survey
Character-level convolutional neural networks (char-CNNs) require no knowledge of the semantic or syntactic structure of the language they classify. This property simplifies their implementation but reduces their classification accuracy. Increasing the depth of char-CNN architectures does not result in breakthrough accuracy improvements. Research has not established which char-CNN architectures are optimal for text classification tasks. Manually designing and training char-CNNs is an iterative and time-consuming process that requires expert domain knowledge. Evolutionary deep learning (EDL) techniques, including surrogate-based versions, have demonstrated success in automatically searching for performant CNN architectures for image analysis tasks. Researchers have not applied EDL techniques to search the architecture space of char-CNNs for text classification tasks. This article demonstrates the first work in evolving char-CNN architectures using a novel EDL algorithm based on genetic programming, an indirect encoding and surrogate models, to search for performant char-CNN architectures automatically. The algorithm is evaluated on eight text classification datasets and benchmarked against five manually designed CNN architectures and one long short-term memory (LSTM) architecture. Experiment results indicate that the algorithm can evolve architectures that outperform the LSTM in terms of classification accuracy and five of the manually designed CNN architectures in terms of classification accuracy and parameter count. | Evolving Character-level Convolutional Neural Networks for Text Classification
This paper investigates a wireless-powered two-way relay network (WP-TWRN), in which two sources exchange information with the aid of one amplify-and-forward (AF) relay. Contrary to conventional two-way relay networks, we consider the scenario that the AF relay has no embedded energy supply, and it is equipped with an energy harvesting unit and rechargeable battery. As such, it can accumulate the energy harvested from both sources' signals before helping forward their information. In this paper, we develop a power splitting-based energy accumulation (PS-EA) scheme for the considered WP-TWRN. To determine whether the relay has accumulated sufficient energy, we set a predefined energy threshold for the relay. When the accumulated energy reaches the threshold, the relay splits the received signal power into two parts, one for energy harvesting and the other for information forwarding. If the stored energy at the relay is below the threshold, all the received signal power will be accumulated at the relay's battery. By modeling the finite-capacity battery of the relay as a finite-state Markov Chain (MC), we derive a closed-form expression for the system throughput of the proposed PS-EA scheme over Nakagami-m fading channels. Numerical results validate our theoretical analysis and show that the proposed PS-EA scheme outperforms the conventional time switching-based energy accumulation (TS-EA) scheme and the existing power splitting schemes without energy accumulation. | Wireless-Powered Two-Way Relaying with Power Splitting-based Energy Accumulation
The essences of the weak CP violation, the quark and lepton Jarlskog invariants, are determined toward future model buildings beyond the Standard Model (SM). The equivalence of two calculations of Jarlskog invariants gives a bound on the CP phase in some parametrization. Satisfying the unitarity condition, we obtain the CKM and MNS matrices from the experimental data, and present the results in matrix forms. The Jarlskog determinant $J^q$ in the quark sector is found to be $\sim 3.11\times 10^{-5}|\sin\delks|$ while $J^\ell$ in the leptonic sector is $\sim 2.96\times 10^{-2}|\sin\delksl|$ in the normal hierarchy parametrization. | Jarlskog determinant and data on flavor matrices |
Let $M$ be a closed orientable irreducible $3$-manifold such that $\pi_1(M)$ is left orderable. (a) Let $M_0 = M - Int(B^{3})$, where $B^{3}$ is a compact $3$-ball in $M$. We have a process to produce a co-orientable Reebless foliation $\mathcal{F}$ in $M_0$ such that: (1) $\mathcal{F}$ has a transverse $(\pi_1(M),\mathbb{R})$ structure, (2) there exists a simple closed curve in $M$ that is co-orientably transverse to $\mathcal{F}$ and intersects every leaf of $\mathcal{F}$. More specifically, given a pair $(<,\Gamma)$ composed of a left-invariant order "$<$" of $\pi_1(M)$ and a fundamental domain $\Gamma$ of $M$ in its universal cover with certain property (which always exists), we can produce a resulting foliation in $M - Int(B^{3})$ as above, and we can test if it can extend to a taut foliation of $M$. (b) Suppose further that $M$ is either atoroidal or a rational homology $3$-sphere. If $M$ admits an $\mathbb{R}$-covered foliation $\mathcal{F}_0$, then there is a resulting foliation $\mathcal{F}$ of our process in $M - Int(B^{3})$ such that: $\mathcal{F}$ can extend to an $\mathbb{R}$-covered foliation $\mathcal{F}_{extend}$ of $M$, and $\mathcal{F}_0$ can be recovered from doing a collapsing operation on $\mathcal{F}_{extend}$. Here, by a collapsing operation on $\mathcal{F}_{extend}$, we mean the following process: (1) choosing an embedded product space $S \times I$ in $M$ for some (possibly non-compact) surface $S$ such that $S \times \{0\}, S \times \{1\}$ are leaves of $\mathcal{F}_{extend}$ (notice that $\mathcal{F}_{extend} \mid_{S \times I}$ may not be a product bundle), (2) replacing $\mathcal{F}_{extend} \mid_{S \times I}$ by a single leaf $S$. (c) We conjecture that there always exists a resulting foliation of our process in $M - Int(B^{3})$ which can extend to a taut foliation in $M$. | Left orderability, foliations, and transverse $(\pi_1,\mathbb{R})$ structures for $3$-manifolds with sphere boundary |
We use free probability techniques for computing spectra and Brown measures of some non hermitian operators in finite von Neumann algebras. Examples include u_n+u_∞, where u_n and u_∞ are the generators of Z_n and Z respectively, in the free product group Z_n*Z, or elliptic elements of the form S_a+iS_b, where S_a and S_b are free semi-circular elements of variance a and b. We give some pictorial evidence for connections with spectra of random matrices. | Computation of some examples of Brown's spectral measure in free probability
Zernike polynomials are widely used to describe the wavefront phase as they are well suited to the circular geometry of various optical apertures. Non-conventional optical systems, such as future large optical telescopes with highly segmented primary mirrors or advanced wavefront control devices using segmented mirror membrane facesheets, use approximate numerical methods to reproduce a set of Zernike or hexagonal modes with the limited degree of freedom offered by hexagonal segments. In this paper, we present a novel approach for a rigorous Zernike and hexagonal modes decomposition adapted to hexagonal segmented pupils by means of analytical calculations. By contrast to numerical approaches that are dependent on the sampling of the segment, the decomposition expressed analytically only relies on the number and positions of segments comprising the pupil. Our analytical method allows extremely quick results minimizing computational and memory costs. Further, the proposed formulae can be applied independently from the geometrical architecture of segmented optical apertures. Consequently, the method is universal and versatile per se. This work has many potential applications in particular for modern astronomy with extremely large telescopes. | Analytical decomposition of Zernike and hexagonal modes over an hexagonal segmented optical aperture |
We investigate the impact of asymmetric neutrino emissions on explosive nucleosynthesis in core-collapse supernovae (CCSNe) of progenitors with a mass range of 9.5 to 25$M_{\odot}$. We perform axisymmetric, hydrodynamic simulations of the CCSN explosion with a simplified neutrino-transport scheme, in which anti-correlated dipolar emissions of $\nu_{\rm e}$ and ${\bar \nu}_{\rm e}$ are imposed. We then evaluate abundances and masses of the CCSN ejecta in a post-processing manner. We find that the asymmetric $\nu$-emission leads to the abundant ejection of $p$- and $n$-rich matter in the high-$\nu_{\rm e}$ and -${\bar \nu}_{\rm e}$ hemispheres, respectively. It substantially affects the abundances of the ejecta for elements heavier than Ni regardless of progenitors, although those elements lighter than Ca are less sensitive. Based on these results, we calculate the IMF-averaged abundances of the CCSN ejecta, taking into account the contribution from Type Ia SNe. For $m_{\rm asy} = 10/3\%$ and $10\%$, where $m_{\rm asy}$ denotes the asymmetric degree of the dipole components in the neutrino emissions, the averaged abundances for elements lighter than Y are comparable to those of the solar abundances, whereas those of elements heavier than Ge are overproduced in the case with $m_{\rm asy} \ge 30\%$. Our result also suggests that the effect of the asymmetric neutrino emissions is imprinted in the difference of the abundance ratios [Ni/Fe] and [Zn/Fe] between the high-$\nu_{\rm e}$ and -${\bar \nu}_{\rm e}$ hemispheres, indicating that future spectroscopic X-ray observations of a CCSN remnant will bring evidence of the asymmetric neutrino emissions if they exist. | The impact of asymmetric neutrino emissions on nucleosynthesis in core-collapse supernovae II -- progenitor dependences --
Type Ia Supernova Hubble residuals have been shown to correlate with host galaxy mass, posing a major obstacle to their use in measuring dark energy properties. Here, we calibrate the fundamental metallicity relation (FMR) of Mannucci et al. (2010) for host mass and star formation rates measured from broad-band colors alone. We apply the FMR to the large number of hosts from the SDSS-II sample of Gupta et al. (2011) and find that the scatter in the Hubble residuals is significantly reduced when compared with using only stellar mass (or the mass-metallicity relation) as a fit parameter. Our calibration of the FMR is restricted to only star-forming galaxies, and in the Hubble residual calculation we include only hosts with log(SFR) > -2. Our results strongly suggest that metallicity is the underlying source of the correlation between Hubble residuals and host galaxy mass. Since the FMR is nearly constant between z = 2 and the present, use of the FMR along with light curve width and color should provide a robust distance measurement method that minimizes systematic errors. | The Fundamental Metallicity Relation Reduces Type Ia SN Hubble Residuals More Than Host Mass Alone
Objective. We consider the cross-subject decoding problem from local field potential (LFP) signals, where training data collected from the prefrontal cortex (PFC) of a source subject is used to decode intended motor actions in a destination subject. Approach. We propose a novel supervised transfer learning technique, referred to as data centering, which is used to adapt the feature space of the source to the feature space of the destination. The key ingredients of data centering are the transfer functions used to model the deterministic component of the relationship between the source and destination feature spaces. We propose an efficient data-driven estimation approach for linear transfer functions that uses the first and second order moments of the class-conditional distributions. Main result. We apply our data centering technique with linear transfer functions for cross-subject decoding of eye movement intentions in an experiment where two macaque monkeys perform memory-guided visual saccades to one of eight target locations. The results show peak cross-subject decoding performance of $80\%$, which marks a substantial improvement over random choice decoder. In addition to this, data centering also outperforms standard sampling-based methods in setups with imbalanced training data. Significance. The analyses presented herein demonstrate that the proposed data centering is a viable novel technique for reliable LFP-based cross-subject brain-computer interfacing and neural prostheses. | Cross-subject Decoding of Eye Movement Goals from Local Field Potentials |
Convolutional neural networks (CNNs) demand huge DRAM bandwidth for computational imaging tasks, and block-based processing has recently been applied to greatly reduce the bandwidth. However, the induced additional computation for feature recomputing or the large SRAM for feature reusing will degrade the performance or even forbid the usage of state-of-the-art models. In this paper, we address these issues by considering the overheads and hardware constraints in advance when constructing CNNs. We investigate a novel model family---ERNet---which includes temporary layer expansion as another means for increasing model capacity. We analyze three ERNet variants in terms of hardware requirement and introduce a hardware-aware model optimization procedure. Evaluations on Full HD and 4K UHD applications will be given to show the effectiveness in terms of image quality, pixel throughput, and SRAM usage. The results also show that, for block-based inference, ERNet can outperform the state-of-the-art FFDNet and EDSR-baseline models for image denoising and super-resolution respectively. | ERNet Family: Hardware-Oriented CNN Models for Computational Imaging Using Block-Based Inference |
We study master integrals needed to compute the Higgs boson production cross section via gluon fusion in the infinite top quark mass limit, using a canonical form of differential equations for master integrals, recently identified by Henn, which makes their solution possible in a straightforward algebraic way. We apply the known criteria to derive such a suitable basis for all the phase space master integrals in the aforementioned process at next-to-next-to-leading order in QCD and demonstrate that the method is applicable to next-to-next-to-next-to-leading order as well by solving a non-planar topology. Furthermore, we discuss in great detail how to find an adequate basis using practical examples. Special emphasis is devoted to master integrals which are coupled by their differential equations. | Adequate bases of phase space master integrals for $gg \to h$ at NNLO and beyond
This note sketches the extension of the basic characterisation of modal logic as the bisimulation-invariant fragment of first-order logic to modal logic with graded modalities and a matching adaptation of bisimulation. We focus on showing expressive completeness of graded multi-modal logic for those first-order properties of pointed Kripke structures that are preserved under counting bisimulation equivalence among all or among just all finite pointed Kripke structures. | Graded modal logic and counting bisimulation
We provide a list of equivalent conditions under which an additive operator acting on a space of smooth functions on a compact real interval is a multiple of the derivation. | Characterizations of derivations on spaces of smooth functions |
The pseudogap refers to an enigmatic state of matter with unusual physical properties found below a characteristic temperature $T^*$ in hole-doped high-temperature superconductors. Determining $T^*$ is critical for understanding this state. Here we study the simplest model of correlated electron systems, the Hubbard model, with cluster dynamical mean-field theory to find out whether the pseudogap can occur solely because of strong coupling physics and short nonlocal correlations. We find that the pseudogap characteristic temperature $T^*$ is a sharp crossover between different dynamical regimes along a line of thermodynamic anomalies that appears above a first-order phase transition, the Widom line. The Widom line emanating from the critical endpoint of a first-order transition is thus the organizing principle for the pseudogap phase diagram of the cuprates. No additional broken symmetry is necessary to explain the phenomenon. Broken symmetry states appear in the pseudogap and not the other way around. | Pseudogap temperature as a Widom line in doped Mott insulators |
For first passage percolation (FPP) on the integer lattice with i.i.d. passage time distributions, showing the existence of semi-infinite geodesics along a fixed direction requires unproven assumptions on the limiting shape. We consider FPP on the two-dimensional integer lattice with i.i.d. passage times distributed according to the Durrett-Liggett class of measures. For this model, we show that along any direction in a deterministic angular sector (known as the percolation cone), starting from every lattice point there exists an infinite geodesic in that direction, and such directed geodesics coalesce almost surely. We prove that for this model, bi-infinite geodesics exist almost surely. Our proof does not require any assumption on the limiting shape. | Existence and coalescence of directed infinite geodesics in the percolation cone for Durrett-Liggett class of measures
This paper extends and complements the existing theory for the parabolic Muckenhoupt weights motivated by one-sided maximal functions and a doubly nonlinear parabolic partial differential equation of $p$-Laplace type. The main results include characterizations for the limiting parabolic $A_\infty$ and $A_1$ classes by applying an uncentered parabolic maximal function with a time lag. Several parabolic Calder\'on-Zygmund decompositions, covering and chaining arguments appear in the proofs. | Characterizations of parabolic Muckenhoupt classes |
Homeostatic plasticity is a stabilizing mechanism commonly observed in real neural systems that allows neurons to maintain their activity around a functional operating point. This phenomenon can be used in neuromorphic systems to compensate for slowly changing conditions or chronic shifts in the system configuration. However, to avoid interference with other adaptation or learning processes active in the neuromorphic system, it is important that the homeostatic plasticity mechanism operates on time scales that are much longer than conventional synaptic plasticity ones. In this paper we present an ultra-low leakage circuit, integrated into an automatic gain control scheme, that can implement the synaptic scaling homeostatic process over extremely long time scales. Synaptic scaling consists of globally scaling the synaptic weights of all synapses impinging onto a neuron while maintaining their relative differences, in order to preserve the effects of learning. The scheme we propose controls the global gain of analog log-domain synapse circuits to keep the neuron's average firing rate constant around a set operating point, over extremely long time scales. To validate the proposed scheme, we implemented the ultra-low leakage synaptic scaling homeostatic plasticity circuit in a standard 0.18 $\mu$m Complementary Metal-Oxide Semiconductor (CMOS) process, and integrated it in an array of dynamic synapses connected to an adaptive integrate and fire neuron. The circuit occupies a silicon area of 84 $\mu$m x 22 $\mu$m and consumes approximately 10.8 nW with a 1.8 V supply voltage. We present experimental results from the homeostatic circuit and demonstrate how it can be configured to exhibit time scales of up to 100 kilo-seconds, thanks to a controllable leakage current that can be scaled down to 0.45 atto-Amperes (2.8 electrons/s). | An Ultralow Leakage Synaptic Scaling Homeostatic Plasticity Circuit With Configurable Time Scales up to 100 ks
We extend earlier investigations of heavy-light pseudoscalar mesons to the vector case, using a simple model in the context of the Dyson-Schwinger-Bethe-Salpeter approach. We investigate the effects of a dressed-quark-gluon vertex in a systematic fashion and illustrate and attempt to quantify corrections beyond the phenomenologically very useful and successful rainbow-ladder truncation. In particular, we investigate the dressed quark-photon vertex in such a setup and make a prediction for the experimentally as yet unknown mass of the B_c*, which we obtain at 6.334 GeV, well in line with predictions from other approaches. Furthermore, we combine a comprehensive set of results from the theory literature. The theory average for the mass of the B_c* meson is 6.336 +- 0.002 GeV. | Effects of a dressed quark-gluon vertex in vector heavy-light mesons and theory average of B(c)* meson mass
BP Psc is an active late-type (sp: G9) star with unclear evolutionary status lying at high galactic latitude $b=-57^{\circ}$. It is also the source of a well-collimated bipolar jet. We present the results of a proper motion and radial velocity study of the BP Psc outflow based on archival $H\alpha$ imaging with the GMOS camera at the 8.1-m Gemini-North telescope, as well as recent imaging and long-slit spectroscopy with the SCORPIO multi-mode focal reducer at the 6-m BTA telescope of SAO RAS. The 3D kinematics of the jet revealed a full spatial velocity of up to $\sim$140 km$\cdot$s$^{-1}$ and allows us to estimate the distance to the BP Psc system as $D=135\pm40$ pc. This distance leads to an estimation of the central source luminosity $L_*\approx1.2L_{\odot}$, indicating that it is a $\approx$1.3$M_{\odot}$ T Tauri star with an age $t\lesssim$ 7 Myr. We measured an electron density of order $N_e\sim10^2$ cm$^{-3}$ and a mean ionization fraction $f\approx0.04$ within the jet knots, and estimated an upper limit on the mass-loss rate in the NE lobe of $\dot{M}_{out}\approx1.2\cdot10^{-8}M_{\odot}\cdot yr^{-1}$. The physical characteristics of the outflow are typical for low-excitation YSO jets and consistent with the magnetocentrifugal mechanism of its launching and collimation. The prominent wiggling pattern revealed in $H\alpha$ images allowed us to suppose the existence of a secondary substellar companion in a non-coplanar orbit and estimate its most plausible mass as $M_p\approx 30M_{Jup}$. We conclude that BP Psc is one of the closest young jet-driving systems to the Sun and that its origin is possibly related to an episode of star formation triggered by expanding supershells in the Second Galactic quadrant. | Jet from the enigmatic high-latitude star BP Psc and evolutionary status of its driving source
We analyze 18 million rows of Wi-Fi access logs collected over a one year period from over 120,000 anonymized users at an inner-city shopping mall. The anonymized dataset gathered from an opt-in system provides users' approximate physical location, as well as Web browsing and some search history. Such data provides a unique opportunity to analyze the interaction between people's behavior in physical retail spaces and their Web behavior, serving as a proxy for their information needs. We find: (1) the use of the Wi-Fi network tracks the opening hours of the mall; (2) there is a weekly periodicity in users' visits to the mall; (3) around 60% of registered Wi-Fi users actively browse the Web and around 10% of them use Wi-Fi for accessing Web search engines; (4) people are likely to spend a relatively constant amount of time browsing the Web while their visiting duration may vary; (5) people tend to visit similar mall locations and Web content during their repeated visits to the mall; (6) the physical spatial context has a small but significant influence on the Web content that indoor users browse; (7) accompanying users tend to access resources from the same Web domains. | Analyzing Web Behavior in Indoor Retail Spaces
Low sampling frequency challenges the exact identification of a continuous-time (CT) dynamical system from sampled data, even when its model is identifiable. A necessary and sufficient condition, built from the Koopman operator, is proposed for the exact identification of the CT system from sampled data. The condition gives a Nyquist-Shannon-like critical frequency for exact identification of CT nonlinear dynamical systems with Koopman invariant subspaces: 1) it establishes a sufficient condition on the sampling frequency that permits a discretized sequence of samples to discover the underlying system; 2) it establishes a necessary condition on the sampling frequency below which system aliasing occurs and the underlying system is indistinguishable; and 3) the original CT signal does not have to be band-limited as required in the Nyquist-Shannon Theorem. The theoretical criterion has been demonstrated on a number of simulated examples, including linear systems, nonlinear systems with equilibria, and limit cycles. | A Sampling Theorem for Exact Identification of Continuous-time Nonlinear Dynamical Systems
Lung cancer, particularly in its advanced stages, remains a leading cause of death globally. Though early detection via low-dose computed tomography (CT) is promising, the identification of high-risk factors crucial for surgical mode selection remains a challenge. Addressing this, our study introduces an Attention-Enhanced Graph Convolutional Network (AE-GCN) model to classify whether high-risk factors are present in stage I lung cancer based on preoperative CT images. This will aid surgeons in determining the optimal surgical method before the operation. Unlike previous studies that relied on 3D patch techniques to represent nodule spatial features, our method employs a GCN model to capture the spatial characteristics of pulmonary nodules. Specifically, we regard each slice of the nodule as a graph vertex, and the inherent spatial relationships between slices form the edges. Then, to enhance the expression of nodule features, we integrated both channel and spatial attention mechanisms with a pre-trained VGG model for adaptive feature extraction from pulmonary nodules. Lastly, the effectiveness of the proposed method is demonstrated using real-world data collected from hospitals, thereby emphasizing its potential utility in clinical practice. | High-risk Factor Prediction in Lung Cancer Using Thin CT Scans: An Attention-Enhanced Graph Convolutional Network Approach
Adiabatic quantum algorithms solve computational problems by slowly evolving a trivial state to the desired solution. On an ideal quantum computer, the solution quality improves monotonically with increasing circuit depth. By contrast, increasing the depth in current noisy computers introduces more noise and eventually deteriorates any computational advantage. What is the optimal circuit depth that provides the best solution? Here, we address this question by investigating an adiabatic circuit that interpolates between the paramagnetic and ferromagnetic ground states of the one-dimensional quantum Ising model. We characterize the quality of the final output by the density of defects $d$, as a function of the circuit depth $N$ and noise strength $\sigma$. We find that $d$ is well-described by the simple form $d_\mathrm{ideal}+d_\mathrm{noise}$, where the ideal case $d_\mathrm{ideal}\sim N^{-1/2}$ is controlled by the Kibble-Zurek mechanism, and the noise contribution scales as $d_\mathrm{noise}\sim N\sigma^2$. It follows that the optimal number of steps minimizing the number of defects goes as $\sim\sigma^{-4/3}$. We implement this algorithm on a noisy superconducting quantum processor and find that the dependence of the density of defects on the circuit depth follows the predicted non-monotonous behavior and agrees well with noisy simulations. Our work allows one to efficiently benchmark quantum devices and extract their effective noise strength $\sigma$. | Navigating the noise-depth tradeoff in adiabatic quantum circuits |
Radio continuum surveys of the Galactic plane can find and characterize HII regions, supernova remnants (SNRs), planetary nebulae (PNe), and extragalactic sources. A number of surveys at high angular resolution (<25") at different wavelengths exist to study the interstellar medium (ISM), but no comparable high-resolution and high-sensitivity survey exists at long radio wavelengths around 21cm. We observed a large fraction of the Galactic plane in the first quadrant of the Milky Way (l=14.0-67.4deg and |b| < 1.25deg) with the Karl G. Jansky Very Large Array (VLA) in the C-configuration covering six continuum spectral windows. These data provide a detailed view on the compact as well as extended radio emission of our Galaxy and thousands of extragalactic background sources. We used the BLOBCAT software and extracted 10916 sources. After removing spurious source detections caused by the sidelobes of the synthesised beam, we classified 10387 sources as reliable detections. We smoothed the images to a common resolution of 25" and extracted the peak flux density of each source in each spectral window (SPW) to determine the spectral indices $\alpha$ (assuming $I(\nu)\propto\nu^\alpha$). By cross-matching with catalogs of HII regions, SNRs, PNe, and pulsars, we found radio counterparts for 840 HII regions, 52 SNRs, 164 PNe, and 38 pulsars. We found 79 continuum sources that are associated with X-ray sources. We identified 699 ultra-steep spectral sources ($\alpha < -1.3$) that could be high-redshift galaxies. Around 9000 of the sources we extracted are not classified specifically, but based on their spatial and spectral distribution, a large fraction of them is likely to be extragalactic background sources. More than 7750 sources do not have counterparts in the SIMBAD database, and more than 3760 sources do not have counterparts in the NED database. | Radio continuum emission in the northern Galactic plane: Sources and spectral indices from the THOR survey |
Finding out the differences and commonalities between the knowledge of two parties is an important task. Such a comparison becomes necessary when one party wants to determine how much it is worth to acquire the knowledge of the second party, or similarly when two parties try to determine whether a collaboration could be beneficial. When these two parties cannot trust each other (for example, due to them being competitors), performing such a comparison is challenging as neither of them would be willing to share any of their assets. This paper addresses this problem for knowledge graphs, without a need for non-disclosure agreements nor a third party during the protocol. During the protocol, the intersection between the two knowledge graphs is determined in a privacy-preserving fashion. This is followed by the computation of various metrics, which give an indication of the potential gain from obtaining the other party's knowledge graph, while still keeping the actual knowledge graph contents secret. The protocol makes use of blind signatures and (counting) Bloom filters to reduce the amount of leaked information. Finally, the party who wants to obtain the other's knowledge graph can obtain a part of it in such a way that neither party is able to know beforehand which parts of the graph are obtained (i.e., they cannot choose to only get or share the good parts). After inspecting the quality of this part, the Buyer can decide to proceed with the transaction. The analysis of the protocol indicates that it is secure against malicious participants. Further experimental analysis shows that the resource consumption scales linearly with the number of statements in the knowledge graph. | Secure Evaluation of Knowledge Graph Merging Gain
If $G$ is a group acting on a set $\Omega$ and $\alpha, \beta \in \Omega$, the digraph whose vertex set is $\Omega$ and whose arc set is the orbit $(\alpha, \beta)^G$ is called an {\em orbital digraph} of $G$. Each orbit of the stabiliser $G_\alpha$ acting on $\Omega$ is called a {\it suborbit} of $G$. A digraph is {\em locally finite} if each vertex is adjacent to at most finitely many other vertices. A locally finite digraph $\Gamma$ has more than one end if there exists a finite set of vertices $X$ such that the induced digraph $\Gamma \setminus X$ contains at least two infinite connected components; if there exists such a set containing precisely one element, then $\Gamma$ has {\em connectivity one}. In this paper we show that if $G$ is a primitive permutation group whose suborbits are all finite, possessing an orbital digraph with more than one end, then $G$ has a primitive connectivity-one orbital digraph, and this digraph is essentially unique. Such digraphs resemble trees in many respects, and have been fully characterised in a previous paper by the author. | Orbital graphs of infinite primitive permutation groups |
The diphoton excess around $m_S=750$ GeV observed at ATLAS and CMS can be interpreted as coming from $S=H$ and $A$, the neutral components of a second Higgs doublet. If so, then the consistency of the light Higgs decays with the Standard Model predictions provides upper bounds on the rates of $S\to VV, hZ, hh$ decays. On the other hand, if $h\to\tau\mu$ decay is established, then a lower bound on the rate of $S\to\tau\mu$ decay arises. Requiring that $\Gamma_S\lesssim45$ GeV gives both an upper and a lower bound on the rotation angle from the Higgs basis $(\Phi_v,\Phi_A)$ to the mass basis $(\Phi_h,\Phi_H)$. The charged scalar, with $m_{H^\pm}\simeq750$ GeV, is produced in association with a top quark, and can decay to $\mu^\pm\nu$, $\tau^\pm\nu$, $tb$ and $W^\pm h$. | The phenomenology of the di-photon excess and $h\to\tau\mu$ within 2HDM
Hidden Markov models (HMMs) and partially observable Markov decision processes (POMDPs) provide useful tools for modeling dynamical systems. They are particularly useful for representing the topology of environments such as road networks and office buildings, which are typical for robot navigation and planning. The work presented here describes a formal framework for incorporating readily available odometric information and geometrical constraints into both the models and the algorithm that learns them. By taking advantage of such information, learning HMMs/POMDPs can be made to generate better solutions and require fewer iterations, while being robust in the face of data reduction. Experimental results, obtained from both simulated and real robot data, demonstrate the effectiveness of the approach. | Learning Geometrically-Constrained Hidden Markov Models for Robot Navigation: Bridging the Topological-Geometrical Gap |
Sequence-to-sequence models usually transfer all encoder outputs to the decoder for generation. In this work, by contrast, we hypothesize that these encoder outputs can be compressed to shorten the sequence delivered for decoding. We take Transformer as the testbed and introduce a layer of stochastic gates in-between the encoder and the decoder. The gates are regularized using the expected value of the sparsity-inducing L0 penalty, resulting in completely masking-out a subset of encoder outputs. In other words, via joint training, the L0DROP layer forces Transformer to route information through a subset of its encoder states. We investigate the effects of this sparsification on two machine translation and two summarization tasks. Experiments show that, depending on the task, around 40-70% of source encodings can be pruned without significantly compromising quality. The decrease of the output length endows L0DROP with the potential of improving decoding efficiency, where it yields a speedup of up to 1.65x on document summarization tasks against the standard Transformer. We analyze the L0DROP behaviour and observe that it exhibits systematic preferences for pruning certain word types, e.g., function words and punctuation get pruned most. Inspired by these observations, we explore the feasibility of specifying rule-based patterns that mask out encoder outputs based on information such as part-of-speech tags, word frequency and word position. | On Sparsifying Encoder Outputs in Sequence-to-Sequence Models
We study solutions corresponding to moving domain walls in the Randall-Sundrum universe. The bulk geometry is given by patching together black hole solutions in AdS$_5$, and the motion of the wall is determined from the junction equations. Observers on the wall interpret the motion as cosmological expansion or contraction. We describe the possible wall trajectories, and examine the consequences for localized gravity on the wall. | Dynamics of Anti-de Sitter Domain Walls |
In this paper we give the enumeration formulas for Euclidean self-dual skew-cyclic codes over finite fields when $(n,|\theta|)=1$ and for some cases when $(n,|\theta|)>1,$ where $n$ is the length of the code and $|\theta|$ is the order of automorphism $\theta.$ | A Note on The Enumeration of Euclidean Self-Dual Skew-Cyclic Codes over Finite Fields |
A recently developed model allows for simulations of the influence of an electric field on surface states. The results of slab simulations show a considerable change of the energy of quantum states in the electric field, i.e. a Stark effect associated with the surface (SSSE - Surface States Stark Effect). Detailed studies of GaN slabs demonstrate spatial variation of the conduction and valence band energies, revealing the real nature of the SSSE phenomenon. It is shown that the long-range variation of the electric potential is in accordance with the change of the energy of the conduction and valence bands. However, at short distances from the GaN(0001) surface, the valence band follows the potential change while the energy of the conduction states is increased due to quantum overlap repulsion by surface states. It is also shown that at a clean GaN(0001) surface the Fermi level is pinned at about 0.34 eV below the long-range projection of the conduction band bottom and varies with the field by about 0.31 eV due to electron filling of the surface states. | On the nature of Surface States Stark Effect at clean GaN(0001) surface
Since pulsating subdwarf B (sdBV or EC14026) stars were first discovered (Kilkenny et al, 1997), observational efforts have tried to realize their potential for constraining the interior physics of extreme horizontal branch (EHB) stars. Difficulties encountered along the way include uncertain mode identifications and a lack of stable pulsation mode properties. Here we report on Feige 48, an sdBV star for which follow-up observations have been obtained spanning more than four years, which shows some stable pulsation modes. We resolve the temporal spectrum into five stable pulsation periods in the range 340 to 380 seconds with amplitudes less than 1%, and two additional periods that appear in one dataset each. The three largest amplitude periodicities are nearly equally spaced, and we explore the consequences of identifying them as a rotationally split l=1 triplet by consulting with a representative stellar model. The general stability of the pulsation amplitudes and phases allows us to use the pulsation phases to constrain the timescale of evolution for this sdBV star. Additionally, we are able to place interesting limits on any stellar or planetary companion to Feige 48. | Observations of the pulsating subdwarf B star Feige 48: Constraints on evolution and companions |
The Hilbert-Huang Transform is a novel, adaptive approach to time series analysis that does not make assumptions about the data form. Its adaptive, local character allows the decomposition of non-stationary signals with high time-frequency resolution but also renders it susceptible to degradation from noise. We show that complementing the HHT with techniques such as zero-phase filtering, kernel density estimation and Fourier analysis allows it to be used effectively to detect and characterize signals with low signal to noise ratio. | Methods for detection and characterization of signals in noisy data with the Hilbert-Huang Transform
We present the results of long-term monitoring of the X-ray emission from the ultraluminous X-ray source XMMUJ122939.9+075333 in the extragalactic globular cluster RZ2109. The combination of the high X-ray luminosity, short-term X-ray variability, X-ray spectrum, and optical emission suggests that this system is likely an accreting black hole in a globular cluster. To study the long-term behavior of the X-ray emission from this source, we analyze both new and archival Chandra and XMM-Newton observations, covering 16 years from 2000 to 2016. For all of these observations, we fit extracted spectra of RZ2109 with XSPEC models. The spectra are all dominated by a soft component, which is very soft with typical fit temperatures of T $\simeq$ 0.15 keV. The resulting X-ray fluxes show strong variability on short and long timescales. We also find that the X-ray spectrum often shows no significant change even with luminosity changes as large as a factor of five. | X-ray Variability from the Ultraluminous Black Hole Candidate X-ray Binary in the Globular Cluster RZ 2109
Speech codec enhancement methods are designed to remove distortions added by speech codecs. While classical methods are very low in complexity and add zero delay, their effectiveness is rather limited. Compared to that, DNN-based methods deliver higher quality but they are typically high in complexity and/or require delay. The recently proposed Linear Adaptive Coding Enhancer (LACE) addresses this problem by combining DNNs with classical long-term/short-term postfiltering, resulting in a causal low-complexity model. A shortcoming of the LACE model is, however, that quality quickly saturates when the model size is scaled up. To mitigate this problem, we propose a novel adaptive temporal shaping module that adds high temporal resolution to the LACE model, resulting in the Non-Linear Adaptive Coding Enhancer (NoLACE). We adapt NoLACE to enhance the Opus codec and show that NoLACE significantly outperforms both the Opus baseline and an enlarged LACE model at 6, 9 and 12 kb/s. We also show that LACE and NoLACE are well-behaved when used with an ASR system. | NoLACE: Improving Low-Complexity Speech Codec Enhancement Through Adaptive Temporal Shaping
The theory of Non-Relativistic Quantum Mechanics was created (or discovered) back in the 1920's mainly by Schr\"odinger and Heisenberg, but it is fair enough to say that a more modern and unified approach to the subject was introduced by Dirac and Jordan with their (intrinsic) Transformation Theory. In his famous textbook on quantum mechanics [1], Dirac introduced his well-known bra and ket notation and a view that even Einstein (who was, as is well known, very critical towards the general quantum physical world-view) considered the most elegant presentation of the theory at that time [2]. One characteristic of this formulation is that the observables of position and momentum are truly treated equally, so that an intrinsic phase-space approach seems a natural course to be taken. In fact, we may distinguish at least two different quantum mechanical approaches to the structure of the quantum phase space: the Weyl-Wigner (WW) formalism and the theory of Coherent States (CS). The Weyl-Wigner formalism has had many applications, ranging from the discussion of the classical/quantum mechanical transition and quantum chaos to signal analysis [3,4]. The Coherent State formalism had a profound impact on Quantum Optics and has, over the course of time, found applications in diverse areas such as geometric quantization, wavelet and harmonic analysis [5]. In this chapter we present a compact review of these formalisms (with a more intrinsic and coordinate-independent notation), with a view towards some non-standard and up-to-date applications such as modular variables and weak values. | Classical Structures in Quantum Mechanics and Applications
Orthogonality regularization has been developed to prevent deep CNNs from training instability and feature redundancy. Among existing proposals, kernel orthogonality regularization enforces orthogonality by minimizing the residual between the Gram matrix formed by convolutional filters and the orthogonality matrix. We propose a novel measure for achieving better orthogonality among filters, which disentangles diagonal and correlation information from the residual. The model equipped with the measure under the principle of imposing strict orthogonality between filters surpasses previous regularization methods in near-orthogonality. Moreover, we observe the benefits of improved strict filter orthogonality in relatively shallow models, but as model depth increases, the performance gains in models employing strict kernel orthogonality decrease sharply. Furthermore, based on the observation of the potential conflict between strict kernel orthogonality and growing model capacity, we propose a relaxation theory on kernel orthogonality regularization. The relaxed kernel orthogonality achieves enhanced performance on models with increased capacity, shedding light on the burden of strict kernel orthogonality on deep model performance. We conduct extensive experiments with our kernel orthogonality regularization toolkit on ResNet and WideResNet in CIFAR-10 and CIFAR-100. We observe state-of-the-art gains in model performance from the toolkit, which includes both strict orthogonality and relaxed orthogonality regularization, and obtain more robust models with expressive features. These experiments demonstrate the efficacy of our toolkit and subtly provide insights into the often overlooked challenges posed by strict orthogonality, addressing the burden of strict orthogonality on capacity-rich models. | Towards Better Orthogonality Regularization with Disentangled Norm in Training Deep CNNs |
Wall-resolved large-eddy simulations are performed to study the impact of spanwise traveling transversal surface waves in zero-pressure gradient turbulent boundary layer flow. Eighty variations of wavelength, period, and amplitude of the space- and time-dependent sinusoidal wall motion are considered for a boundary layer at a momentum thickness based Reynolds number of $Re_\theta = 1000$. The results show a strong decrease of friction drag of up to $26\,\%$ and considerable net power saving of up to $10\,\%$. However, the highest net power saving does not occur at the maximum drag reduction. The drag reduction is modeled as a function of the actuation parameters by support vector regression using the LES data. A substantial attenuation of the near-wall turbulence intensity and especially a weakening of the near-wall velocity streaks are observed. Similarities between the current actuation technique and the method of a spanwise oscillating wall without any normal surface deflection are reported. In particular, the generation of a directional spanwise oscillating Stokes layer is found to be related to skin-friction reduction. | Drag Reduction and Energy Saving by Spanwise Traveling Transversal Surface Waves for Flat Plate Flow |
The aim of this paper is to provide a plausible explanation for the large amplitude microlensing events observed in the cluster lensed quasar system SDSS J1004+4112. The microlensed quasar images appear to lie well clear of the stellar population of the cluster, raising the possibility that the cluster dark matter is composed of compact bodies which are responsible for the observed microlensing. In the first part of the paper we establish the exact structure of the difference light curves attributed to microlensing from photometric monitoring programmes in the literature. We then show from measures of surface brightness that the probability of microlensing by stars in the cluster is negligibly small. Finally we relax our assumption that the cluster dark matter is in the form of smoothly distributed particles, but instead is made up of compact bodies. We then use computer simulations of the resulting magnification pattern to estimate the probability of microlensing. Our results show that for a range of values for source size and lens mass the observed large microlensing amplitude is consistent with the statistics from the simulations. We conclude that providing the assumption of smoothly distributed dark matter is relaxed, the observed large amplitude microlensing can be accounted for by allowing the cluster dark matter to be in the form of solar mass compact bodies. We further conclude that the most plausible identity for these bodies is primordial black holes. | SDSS J1004+4112: the case for a galaxy cluster dominated by primordial black holes |
In 2000, M. Burger and S. Mozes introduced universal groups acting on trees with a prescribed local action. We generalize this concept to groups acting on right-angled buildings. When the right-angled building is thick and irreducible of rank at least 2 and each of the local permutation groups is transitive and generated by its point stabilizers, we show that the corresponding universal group is a simple group. When the building is locally finite, these universal groups are compactly generated totally disconnected locally compact groups, and we describe the structure of the maximal compact open subgroups of the universal groups as a limit of generalized wreath products. | Universal groups for right-angled buildings |
We show how to define gauge-covariant coordinate transformations on a noncommuting space. The construction uses the Seiberg-Witten equation and generalizes similar results for commuting coordinates. | Covariant Coordinate Transformations on Noncommutative Space |
This study introduces a novel hybrid Cerenkov-scintillation dosimeter intended for irradiation angle measurements based on the Cerenkov angular dependency. First measurements aimed at validating the ability to account for the Cerenkov electron energy spectrum dependency by simultaneously measuring the deposited dose, thus isolating signal variations resulting from the angular dependency. The Cerenkov probe is composed of a 10-mm long sensitive volume of clear PMMA optical fiber separated by an absorptive filter from a 1-mm diameter transport fiber. Filtered and raw Cerenkov signals from the sensitive volume and transport fiber, respectively, were collected using the Hyperscint RP-200 scintillation dosimetry platform. The total signal was unmixed using a hyperspectral approach. Dose calibration of the detector signal was accomplished with photon and electron beams. Using a solid-water phantom, measurements at fixed incident angles covering a wide range of doses and output factors were performed. For a fixed incident angle, signal characterization of the Cerenkov detector displays a linear dose-light relationship. As expected, the sensitive volume signal was found to be energy dependent. Output factors were accurately measured within 0.8% for field sizes up to 25 cm x 25 cm with both photons and electrons. A first validation of the Cerenkov angular dependency shows a linear dose-light relationship for the whole range of angles tested. As expected, the Cerenkov signal intensity per dose unit varies with the irradiation angle due to the angular dependency. Results showed that using calibration conditions in which the electron energy spectrum is similar to the measurement conditions makes it possible to rely on the deposited dose to account for this dependency. These preliminary results constitute a first step toward experimental irradiation angle measurements. | External beam irradiation angle measurement using Cerenkov emission I: Signal dependencies consideration
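The hyperspectral unmixing step described above — separating the sensitive-volume Cerenkov signal from the transport-fiber background given their spectra — can be sketched as a linear least-squares problem. Everything below (the basis spectra, the mixing weights, the function name `unmix`) is a hypothetical illustration under the assumption of a two-component linear mixing model; it is not data or code from the study.

```python
# Minimal two-component linear spectral unmixing sketch: model the measured
# spectrum as a*basis_a + b*basis_b and recover (a, b) by least squares.
# All spectra here are hypothetical illustrative values.

def unmix(measured, basis_a, basis_b):
    """Solve min ||a*basis_a + b*basis_b - measured||^2 via normal equations."""
    aa = sum(x * x for x in basis_a)
    bb = sum(x * x for x in basis_b)
    ab = sum(x * y for x, y in zip(basis_a, basis_b))
    am = sum(x * y for x, y in zip(basis_a, measured))
    bm = sum(x * y for x, y in zip(basis_b, measured))
    det = aa * bb - ab * ab  # nonzero when the basis spectra are independent
    a = (am * bb - bm * ab) / det
    b = (bm * aa - am * ab) / det
    return a, b

# Hypothetical normalized basis spectra over five wavelength bins:
cerenkov_volume = [0.1, 0.3, 0.4, 0.15, 0.05]   # filtered sensitive volume
transport_fiber = [0.3, 0.25, 0.2, 0.15, 0.1]   # raw transport-fiber signal

# Synthetic measurement: 2.0 parts volume signal + 1.5 parts fiber signal.
measured = [2.0 * v + 1.5 * t for v, t in zip(cerenkov_volume, transport_fiber)]
a, b = unmix(measured, cerenkov_volume, transport_fiber)
print(round(a, 6), round(b, 6))  # recovers the mixing weights: 2.0 1.5
```

In the noiseless synthetic case the weights are recovered exactly; with real detector spectra a regularized or non-negative least-squares fit would typically be preferred.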
The recent discovery of topological insulator (TI) offers new opportunities for the development of thermoelectrics, because many TIs (like Bi$_2$Te$_3$) are excellent thermoelectric (TE) materials. In this review, we will first describe the general TE properties of TIs and show that the coexistence of the bulk and boundary states in TIs introduces unusual TE properties, including strong size effects and anomalous Seebeck effect. Importantly, the TE figure of merit $zT$ of TIs is no longer an intrinsic property, but depends strongly on the geometric size. The geometric parameters of two-dimensional TIs can be tuned to enhance $zT$ to be significantly greater than 1. Then a few proof-of-principle experiments on three-dimensional TIs will be discussed, which observed unconventional TE phenomena that are closely related to the topological nature of the materials. However, current experiments indicate that the metallic surface states, if their advantage of high mobility is not fully utilized, would be detrimental to TE performance. Finally we provide an outlook for future work on topological materials, which offers great possibilities to discover exotic TE effects and may lead to significant breakthroughs in improving $zT$. | Thermoelectric Effects and Topological Insulators |
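The figure of merit $zT$ discussed throughout the review has the standard definition $zT = S^2 \sigma T / \kappa$. A minimal sketch of evaluating it follows; the numerical values are illustrative, roughly Bi$_2$Te$_3$-like room-temperature numbers, and are not results from the review.

```python
# Dimensionless thermoelectric figure of merit zT = S^2 * sigma * T / kappa,
# with S the Seebeck coefficient, sigma the electrical conductivity,
# T the absolute temperature, and kappa the total thermal conductivity.
# Input values below are illustrative, not data from the review.

def figure_of_merit(seebeck_V_per_K, sigma_S_per_m, kappa_W_per_mK, T_K):
    """Compute zT = S^2 * sigma * T / kappa (dimensionless)."""
    return seebeck_V_per_K**2 * sigma_S_per_m * T_K / kappa_W_per_mK

zT = figure_of_merit(seebeck_V_per_K=200e-6,  # 200 uV/K
                     sigma_S_per_m=1.0e5,     # 1000 S/cm
                     kappa_W_per_mK=1.5,
                     T_K=300.0)
print(round(zT, 2))  # 0.8 for these illustrative values
```

The size-dependent $zT$ of a TI highlighted in the review arises because the surface (boundary) and bulk contributions to $S$, $\sigma$, and $\kappa$ scale differently with geometry, so the effective inputs to this formula change with sample dimensions.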
We examine the Hessian potential from which the flat Minkowski spacetime in $(1+1)$ dimensions is derived. Entanglement thermodynamics based on the Hessian geometry enables us to obtain the entanglement entropy of a corresponding quantum state by means of holography. We find that the positivity of the entropy leads to the presence of past and future causal cones in the Minkowski spacetime. We also find that the quantum state is equivalent to the thermofield-double state, and the entropy is then proportional to the temperature. This proportionality is consistent with previous holographic work. The present Hessian-geometrical approach captures how the causality on the classical side is converted into the quantum entanglement inherent in thermofield dynamics. | Correspondence between causality in flat Minkowski spacetime and entanglement in thermofield-double state: Hessian-geometrical study
Enhancing the photon detection efficiency and time resolution of photodetectors across the entire visible range is critical to improving the image quality of time-of-flight (TOF)-based imaging systems and fluorescence lifetime imaging (FLIM). In this work, we evaluate the gain, detection efficiency, and timing performance of avalanche photodiodes (APD) with photon-trapping nanostructures for photons with 450 and 850 nm wavelengths. At 850 nm wavelength, our photon-trapping avalanche photodiodes showed 30 times higher gain, an enhancement of the absorption efficiency from 16% to >60%, and a 50% reduction in the full width at half maximum (FWHM) pulse response time close to the breakdown voltage. At 450 nm wavelength, the external quantum efficiency increased from 54% to 82%, while the gain was enhanced more than 20-fold. Therefore, silicon APDs with photon-trapping structures exhibited a dramatic increase in absorption compared to control devices. Results suggest that very thin devices with fast timing properties and high absorption from the near-ultraviolet to the near-infrared region can be manufactured for high-speed applications in biomedical imaging. This study paves the way towards single-photon detectors with photon-trapping structures with gains above 10^6 for the entire visible range. | Avalanche Photodetectors with Photon Trapping Structures for Biomedical Imaging Applications
The pharmaceutical success of atorvastatin (ATV), a widely employed drug against the "bad" cholesterol (LDL) and cardiovascular diseases, traces back to its ability to scavenge free radicals. Unfortunately, information on its antioxidant properties is missing or unreliable. Here, we report detailed quantum chemical results for ATV and its ortho- and para-hydroxy metabolites (o-ATV, p-ATV) in the methanolic phase. They comprise global reactivity indices, bond order indices, and spin densities as well as all relevant enthalpies of reaction (bond dissociation BDE, ionization IP and electron attachment EA, proton detachment PDE and proton affinity PA, and electron transfer ETE). With these properties in hand, we can provide the first theoretical explanation of the experimental finding that, due to their free radical scavenging activity, ATV hydroxy metabolites, rather than the parent ATV, have a substantial inhibitory effect on LDL and the like. Surprisingly (because it is contrary to most cases currently known), we unambiguously found that HAT (direct hydrogen atom transfer) rather than SPLET (sequential proton loss electron transfer) or SET-PT (stepwise electron transfer proton transfer) is the thermodynamically preferred pathway by which o-ATV and p-ATV in the methanolic phase can scavenge DPPH$^\bullet$ (1,1-diphenyl-2-picrylhydrazyl) radicals. From a quantum chemical perspective, the ATV species investigated are surprising because of the nontrivial correlations between bond dissociation energies, bond lengths, bond order indices, and the pertaining stretching frequencies, which do not fit the framework of naive chemical intuition. | Why Ortho- and Para-Hydroxy Metabolites Can Scavenge Free Radicals that the Parent Atorvastatin Cannot? Important Pharmacologic Insight from Quantum Chemistry
We consider a general mixed-integer convex program. We first develop an algorithm for solving this problem, and show its finite convergence. We then develop a finitely convergent decomposition algorithm that separates binary variables from integer and continuous variables. The integer and continuous variables are treated as second stage variables. An oracle for generating a parametric cut under a subgradient decomposition assumption is developed. The decomposition algorithm is applied to show that two-stage (distributionally robust) convex programs with binary variables in the first stage can be solved to optimality within a cutting plane framework. For simplicity, the paper assumes that certain convex programs generated in the course of the algorithm are solved to optimality. | A Finitely Convergent Cutting Plane, and a Bender's Decomposition Algorithm for Mixed-Integer Convex and Two-Stage Convex Programs using Cutting Planes |
Janus transition metal dichalcogenides are an emerging class of atomically thin materials with engineered broken mirror symmetry that gives rise to long-lived dipolar excitons, Rashba splitting, and topologically protected solitons. They hold great promise as a versatile nonlinear optical platform due to their broadband harmonic generation tunability, ease of integration on photonic structures, and nonlinearities beyond the basal crystal plane. Here, we study second and third harmonic generation in MoSSe and WSSe Janus monolayers. We use polarization-resolved spectroscopy to map the full second-order susceptibility tensor of MoSSe, including its out-of-plane components. In addition, we measure the effective third-order susceptibility, and the second-order nonlinear dispersion close to exciton resonances for both MoSSe and WSSe at room and cryogenic temperatures. Our work sets a bedrock for understanding the nonlinear optical properties of Janus transition metal dichalcogenides and probing their use in the next-generation on-chip multifaceted photonic devices. | Nonlinear Dispersion Relation and Out-of-Plane Second Harmonic Generation in MoSSe and WSSe Janus Monolayers |
Low-energy nuclear structure is not sensitive enough to resolve fine details of the nucleon-nucleon (NN) interaction. The insensitivity of infrared physics to the details of the short-range strong interaction allows for a consistent formulation, free of ultraviolet divergences, of a local theory at the level of the local energy density functional (LEDF), including, on the same footing, both the particle-hole and the particle-particle channels. A major difficulty is related to the parameterization of the nuclear LEDF and its density dependence. It is argued that the structural simplicity of terminating or isomeric states offers an invaluable source of information that can be used for fine-tuning of the NN interaction in general and the nuclear LEDF parameters in particular. Practical applications of terminating states at the level of the LEDF and the nuclear shell model are discussed. | Probing effective nucleon-nucleon interaction at band termination
We provide a geometric model for the classifying space of automorphism groups of Hermitian vector bundles over a ring with involution $R$ such that $\frac{1}{2} \in R$; this generalizes a result of Schlichting-Tripathi \cite{SchTri}. We then prove a periodicity theorem for Hermitian $K$-theory and use it to construct an $E_\infty$ motivic ring spectrum $\mathbf{KR}^{\mathrm{alg}}$ representing homotopy Hermitian $K$-theory. From these results, we show that $\mathbf{KR}^{\mathrm{alg}}$ is stable under base change, and cdh descent for homotopy Hermitian $K$-theory of rings with involution is a formal consequence. | Cdh Descent for Homotopy Hermitian $K$-Theory of Rings with Involution |
We bootstrap the three-point form factor of the chiral stress-tensor multiplet in planar $\mathcal{N}=4$ supersymmetric Yang-Mills theory at six, seven, and eight loops, using boundary data from the form factor operator product expansion. This may represent the highest perturbative order to which multi-variate quantities in a unitary four-dimensional quantum field theory have been computed. In computing this form factor, we observe and employ new restrictions on pairs and triples of adjacent letters in the symbol. We provide details about the function space required to describe the form factor through eight loops. Plotting the results on various lines provides striking numerical evidence for a finite radius of convergence of perturbation theory. By the principle of maximal transcendentality, our results are expected to give the highest weight part of the $g g \rightarrow H g$ and $H \rightarrow ggg$ amplitudes in the heavy-top limit of QCD through eight loops. These results were also recently used to discover a new antipodal duality between this form factor and a six-point amplitude in the same theory. | Bootstrapping a Stress-Tensor Form Factor through Eight Loops |
In this paper we demonstrate that asymmetric hyperbolic metamaterials (AHM) can produce strongly directive thermal emission in the far-field zone that exceeds Planck's limit. The asymmetry is inherent to a uniaxial medium whose optical axis is tilted with respect to the medium interfaces, and it appears as a difference in the properties of waves propagating upward and downward with respect to the interface. It is known that a high density of states (DOS) for certain photons occurs in ordinary hyperbolic metamaterials, but the emission of these photons into the much smaller number of states in vacuum is prevented by total internal reflection. However, the use of AHM enhances the efficiency of coupling between the waves in the AHM and the waves in free space, which results in Super-Planckian far-field thermal emission in certain directions. Various plasmonic metamaterials can be used for the realization of AHM. As an example, thermal emission from an AHM based on a graphene multilayer is discussed. | Super-Planckian far-zone thermal emission from asymmetric hyperbolic metamaterials
PageRank is a fundamental link analysis algorithm that also functions as a key representative of the performance of Sparse Matrix-Vector (SpMV) multiplication. The traditional PageRank implementation generates fine-granularity random memory accesses, resulting in a large amount of wasteful DRAM traffic and poor bandwidth utilization. In this paper, we present a novel Partition-Centric Processing Methodology (PCPM) to compute PageRank that drastically reduces the amount of DRAM communication while achieving high sustained memory bandwidth. PCPM uses a partition-centric abstraction coupled with the Gather-Apply-Scatter (GAS) programming model. By carefully examining how a PCPM-based implementation impacts the communication characteristics of the algorithm, we propose several system optimizations that improve the execution time substantially. More specifically, we develop (1) a new data layout that significantly reduces communication and random DRAM accesses, and (2) branch avoidance mechanisms to eliminate unpredictable data-dependent branches. We perform detailed analytical and experimental evaluation of our approach using 6 large graphs and demonstrate an average 2.7x speedup in execution time and 1.7x reduction in communication volume, compared to the state-of-the-art. We also show that, unlike other GAS-based implementations, PCPM is able to further reduce main memory traffic by taking advantage of intelligent node labeling that enhances locality. Although we use PageRank as the target application in this paper, our approach can be applied to generic SpMV computation. | Accelerating PageRank using Partition-Centric Processing
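The baseline computation that PCPM accelerates can be sketched as a scatter-style PageRank iteration. The toy graph and parameters below are illustrative only — this is not the paper's datasets or the PCPM implementation itself; it merely exposes the scatter-phase random accesses that the partition-centric layout batches and sequentializes.

```python
# Baseline scatter (push) PageRank as repeated sparse matrix-vector products.
# Assumes every node has at least one outgoing edge (no dangling nodes).
# Graph and parameters are illustrative, not from the paper.

def pagerank(out_edges, num_nodes, damping=0.85, iters=50):
    rank = [1.0 / num_nodes] * num_nodes
    for _ in range(iters):
        contrib = [0.0] * num_nodes
        for src, dsts in out_edges.items():
            share = rank[src] / len(dsts)
            for dst in dsts:           # scatter phase: fine-granularity random
                contrib[dst] += share  # writes -- what PCPM batches per partition
        rank = [(1 - damping) / num_nodes + damping * c for c in contrib]
    return rank

# Tiny 3-node cycle: by symmetry every node converges to rank 1/3.
edges = {0: [1], 1: [2], 2: [0]}
r = pagerank(edges, 3)
print([round(x, 4) for x in r])  # [0.3333, 0.3333, 0.3333]
```

The inner `contrib[dst] += share` line is the data-dependent random access pattern the abstract describes; a partition-centric scheme instead groups destination updates by cache-resident partition before applying them.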