text | summary |
---|---|
In the first part of the paper I will present a brief review of the Hardy-Weinberg equilibrium and its formulation in projective algebraic geometry. In the second and last part I will discuss examples and generalizations of the topic. | Considerations on the genetic equilibrium law |
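For readers unfamiliar with the law, a minimal illustrative sketch (not from the paper): under Hardy-Weinberg equilibrium with allele frequencies $p$ and $q = 1-p$, the genotype frequencies are $(p^2, 2pq, q^2)$, so a distribution $(x, y, z)$ is in equilibrium exactly when $y^2 = 4xz$, the conic underlying the projective-geometric formulation.

```python
def hw_genotype_freqs(p):
    """Genotype frequencies (AA, Aa, aa) under Hardy-Weinberg equilibrium."""
    q = 1.0 - p
    return (p * p, 2 * p * q, q * q)

def is_hw_equilibrium(x, y, z, tol=1e-12):
    """A distribution (x, y, z) lies on the Hardy-Weinberg conic iff y^2 = 4xz."""
    return abs(y * y - 4 * x * z) < tol

x, y, z = hw_genotype_freqs(0.3)
assert is_hw_equilibrium(x, y, z)
assert not is_hw_equilibrium(0.5, 0.0, 0.5)  # all-homozygote population is off the conic
```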
In quasi-persistent neutron star transients, long outbursts cause the neutron star crust to be heated out of thermal equilibrium with the rest of the star. During quiescence, the crust then cools back down. Such crustal cooling has been observed in two quasi-persistent sources: KS 1731-260 and MXB 1659-29. Here we present an additional Chandra observation of MXB 1659-29 in quiescence, which extends the baseline of monitoring to 6.6 yr after the end of the outburst. This new observation strongly suggests that the crust has thermally relaxed, with the temperature remaining consistent over 1000 days. Fitting the temperature cooling curve with an exponential plus constant model we determine an e-folding timescale of 465 +/- 25 days, with the crust cooling to a constant surface temperature of kT = 54 +/- 2 eV (assuming D=10 kpc). From this, we infer a core temperature in the range 3.5E7-8.3E7 K (assuming D=10 kpc), with the uncertainty due to the surface composition. Importantly, we tested two neutron star atmosphere models as well as a blackbody model, and found that the thermal relaxation time of the crust is independent of the chosen model and the assumed distance. | Cooling of the crust in the neutron star low-mass X-ray binary MXB 1659-29 |
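The exponential-plus-constant model referred to above is $kT(t) = A\,e^{-t/\tau} + kT_\infty$. A minimal sketch of such a fit on synthetic data, scanning $\tau$ and solving a linear least-squares problem for the remaining parameters (the numbers below are illustrative stand-ins, not the actual Chandra measurements):

```python
import math

def fit_cooling_curve(t, y, tau_grid):
    """Fit y = a*exp(-t/tau) + c by scanning tau and solving the 2x2
    linear least-squares (normal) equations for (a, c) at each tau."""
    best = None
    n = len(t)
    for tau in tau_grid:
        e = [math.exp(-ti / tau) for ti in t]
        see = sum(x * x for x in e)
        se = sum(e)
        sy = sum(y)
        sey = sum(x * yi for x, yi in zip(e, y))
        det = see * n - se * se
        a = (sey * n - se * sy) / det
        c = (see * sy - se * sey) / det
        sse = sum((a * ei + c - yi) ** 2 for ei, yi in zip(e, y))
        if best is None or sse < best[0]:
            best = (sse, tau, a, c)
    return best[1:]  # (tau, a, c)

# Synthetic observations loosely inspired by the quoted numbers
# (e-folding time ~465 d, floor ~54 eV); NOT the paper's data.
t_obs = [30, 100, 300, 600, 1000, 1500, 2400]       # days since outburst end
kT_obs = [60.0 * math.exp(-t / 465.0) + 54.0 for t in t_obs]

tau_fit, a_fit, floor_fit = fit_cooling_curve(t_obs, kT_obs, range(100, 900, 5))
assert abs(tau_fit - 465) <= 5
assert abs(floor_fit - 54.0) < 1e-6
```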
The overheads of classical decoding for quantum error correction on superconducting quantum systems grow rapidly with the number of logical qubits and their correction code distance. Decoding at room temperature is bottle-necked by refrigerator I/O bandwidth while cryogenic on-chip decoding is limited by area/power/thermal budget. To overcome these overheads, we are motivated by the observation that in the common case, error signatures are fairly trivial with high redundancy/sparsity, since the error correction codes are over-provisioned to correct for uncommon worst-case complex scenarios (to ensure substantially low logical error rates). If suitably exploited, these trivial signatures can be decoded and corrected with insignificant overhead, thereby alleviating the bottlenecks described above, while still handling the worst-case complex signatures by state-of-the-art means. Our proposal, targeting Surface Codes, consists of: 1) Clique: A lightweight decoder for decoding and correcting trivial common-case errors, designed for the cryogenic domain. The decoder is implemented for SFQ logic. 2) A statistical confidence-based technique for off-chip decoding bandwidth allocation, to efficiently handle rare complex decodes which are not covered by the on-chip decoder. 3) A method for stalling circuit execution, for the worst-case scenarios in which the provisioned off-chip bandwidth is insufficient to complete all requested off-chip decodes. In all, our proposal enables 70-99+% off-chip bandwidth elimination across a range of logical and physical error rates, without significantly sacrificing the accuracy of state-of-the-art off-chip decoding. By doing so, it achieves 10-10000x bandwidth reduction over prior off-chip bandwidth reduction techniques. Furthermore, it achieves a 15-37x resource overhead reduction compared to prior on-chip-only decoding. | Better Than Worst-Case Decoding for Quantum Error Correction |
Faults in HVAC systems degrade thermal comfort and energy efficiency in buildings and have received significant attention from the research community, with data-driven methods gaining in popularity. Yet the lack of labeled data, such as normal versus faulty operational status, has slowed the application of machine learning to HVAC systems. In addition, for any particular building, there may be an insufficient number of observed faults over a reasonable amount of time for training. To overcome these challenges, we present a transfer methodology for a novel Bayesian classifier designed to distinguish between normal operations and faulty operations. The key is to train this classifier on a building with a large amount of sensor and fault data (for example, via simulation or standard test data) and then transfer the classifier to a new building using a small amount of normal operations data from the new building. We demonstrate a proof-of-concept for transferring a classifier between architecturally similar buildings in different climates and show that few samples are required to maintain classification precision and recall. | Transfer Learning for HVAC System Fault Detection |
We theoretically show that the three-dimensional (3D) topological insulator (TI)/thin-film ferromagnetic metal (FMM) bilayer structure can be a quantum anomalous Hall (QAH) insulator with a wide global band gap. Studying the band structure and the weight distributions of eigenstates, we demonstrate that attaching a metallic thin film to the 3DTI can shift the topologically non-trivial state into the metal layers due to the hybridization of bands around the original Dirac point. By introducing the magnetic exchange interaction in the thin-film layers, we compute the anomalous Hall conductivity and magnetic anisotropy of the bilayer structure and find that it realizes a wider QAH gap than usual materials, such as magnetically doped thin films of 3DTI and 3DTI/ferromagnetic insulator heterostructures. Our results indicate that the 3DTI/thin-film FMM bilayer structure may implement the QAH effect even at room temperature, which will pave the way to the experimental realization of other exotic topological quantum phenomena. | Quantum Anomalous Hall Effect in Three-dimensional Topological Insulator/Thin-film Ferromagnetic Metal Bilayer Structure |
We demonstrate that a one-dimensional magnetic system can exhibit a Cantor-type spectrum, using the example of a chain graph with $\delta$ coupling at the vertices exposed to a magnetic field perpendicular to the graph plane and varying along the chain. If the field grows linearly with an irrational slope, measured in terms of the flux through the loops of the chain, we determine the character of the spectrum by relating it to the almost Mathieu operator. | Cantor spectra of magnetic chain graphs |
We report on cw measurements of probe beam absorption and four-wave-mixing spectra in a $^{85}$Rb magneto-optical trap taken while the trap is in operation. The trapping beams are used as pump light. We concentrate on the central feature of the spectra at small pump-probe detuning and attribute its narrow resonant structures to the superposition of Raman transitions between light-shifted sublevels of the ground atomic state and to atomic recoil processes. These two contributions have different dependencies on trap parameters and we show that the former is inhomogeneously broadened. The strong dependence of the spectra on the probe-beam polarization indicates the existence of large optical anisotropy of the cold-atom sample, which is attributed to the recoil effects. We point out that the recoil-induced resonances can be isolated from other contributions, making pump-probe spectroscopy a highly sensitive diagnostic tool for atoms in a working MOT. | Probe spectroscopy in an operating magneto-optical trap: the role of Raman transitions between discrete and continuum atomic states |
The electron-hole conversion at the normal-metal superconductor interface in inversion-symmetric Weyl semimetals is investigated with an effective two-band model. We find that the specular Andreev reflection of Weyl fermions has two unusual features. The Andreev conductance for s-wave BCS pairing states is anisotropic, depending on the angle between the line connecting a pair of Weyl points and the normal of the junction, due to opposite chirality carried by the paired electrons. For the Fulde-Ferrell-Larkin-Ovchinnikov pairing states, the Andreev reflection spectrum is isotropic and is independent of the finite momentum of the Cooper pairs. | Specular Andreev Reflection in Inversion-symmetric Weyl-semimetals |
We are interested in the problem of conversational analysis and its application to the health domain. Cognitive Behavioral Therapy is a structured approach in psychotherapy, allowing the therapist to help the patient identify and modify maladaptive thoughts, behaviors, or actions. This cooperative effort can be evaluated using the Working Alliance Inventory (WAI) Observer-rated Shortened - a 12-item inventory covering task, goal, and relationship - which has a relevant influence on therapeutic outcomes. In this work, we investigate the relation between this alliance inventory and the spoken conversations (sessions) between the patient and the psychotherapist. We delivered eight weeks of e-therapy, collected the audio and video call sessions, and manually transcribed them. The spoken conversations have been annotated and evaluated with WAI ratings by professional therapists. We have investigated speech and language features and their association with WAI items. The feature types include turn dynamics, lexical entrainment, and conversational descriptors extracted from the speech and language signals. Our findings provide strong evidence that a subset of these features are strong indicators of working alliance. To the best of our knowledge, this is the first study to exploit speech and language for characterising the working alliance. | What can Speech and Language Tell us About the Working Alliance in Psychotherapy |
The well known elliptic discrete Painlev\'e equation of Sakai is constructed by a standard translation on the $E_8^{(1)}$ lattice, given by nearest neighbor vectors. In this paper, we give a new elliptic discrete Painlev\'e equation obtained by translations along next-nearest-neighbor vectors. This equation is a generic (8-parameter) version of a 2-parameter elliptic difference equation found by reduction from Adler's partial difference equation, the so-called Q4 equation. We also provide a projective reduction of the well known equation of Sakai. | Elliptic Painlev\'e equations from next-nearest-neighbor translations on the $E_8^{(1)}$ lattice |
Measurements of Bose-Einstein or HBT correlations of identified charged particles provide insight into the space-time structure of particle-emitting sources in heavy-ion collisions. In this paper we present the latest results from the RHIC PHENIX experiment on such measurements. | PHENIX results on Bose-Einstein correlation functions |
Teaching a computer to read and answer general questions pertaining to a document is a challenging yet unsolved problem. In this paper, we describe a novel neural network architecture called the Reasoning Network (ReasoNet) for machine comprehension tasks. ReasoNets make use of multiple turns to effectively exploit and then reason over the relation among queries, documents, and answers. Different from previous approaches using a fixed number of turns during inference, ReasoNets introduce a termination state to relax this constraint on the reasoning depth. With the use of reinforcement learning, ReasoNets can dynamically determine whether to continue the comprehension process after digesting intermediate results, or to terminate reading when it concludes that existing information is adequate to produce an answer. ReasoNets have achieved exceptional performance in machine comprehension datasets, including unstructured CNN and Daily Mail datasets, the Stanford SQuAD dataset, and a structured Graph Reachability dataset. | ReasoNet: Learning to Stop Reading in Machine Comprehension |
The action of the discrete symmetries on the scalar mode functions of the de Sitter spacetime is studied. The invariance with respect to a combination of discrete symmetries is put forward as a criterion to select a certain vacuum out of a family of vacua. The implications of the choices for eigenfunctions of various common sets of commuting operators are explored, and the results are compared to the different original choices from the literature that have been utilized in order to exhibit thermal effects. | Discrete symmetries determining scalar quantum modes on the de Sitter spacetime |
Many superconducting qubit systems use the dispersive interaction between the qubit and a coupled harmonic resonator to perform quantum state measurement. Previous works have found that such measurements can induce state transitions in the qubit if the number of photons in the resonator is too high. We investigate these transitions and find that they can push the qubit out of the two-level subspace, and that they show resonant behavior as a function of photon number. We develop a theory for these observations based on level crossings within the Jaynes-Cummings ladder, with transitions mediated by terms in the Hamiltonian that are typically ignored by the rotating wave approximation. We find that the most important of these terms comes from an unexpected broken symmetry in the qubit potential. We confirm the theory by measuring the photon occupation of the resonator when transitions occur while varying the detuning between the qubit and resonator. | Measurement-induced state transitions in a superconducting qubit: Beyond the rotating wave approximation |
We compute fragmentation corrections to hadroproduction of the quarkonium states $J/\psi$, $\chi_{cJ}$, and $\psi(2S)$ at leading power in $m_c^2/p_T^2$, where $m_c$ is the charm-quark mass and $p_T$ is the quarkonium transverse momentum. The computation is carried out in the framework of nonrelativistic QCD. We include corrections to the parton-production cross sections through next-to-leading order in the strong coupling $\alpha_s$ and corrections to the fragmentation functions through second order in $\alpha_s$. We also sum leading logarithms of $p_T^2/m_c^2$ to all orders in perturbation theory. We find that, when we combine these leading-power fragmentation corrections with fixed-order calculations through next-to-leading order in $\alpha_s$, we are able to obtain good fits for $p_T\geq 10$ GeV to hadroproduction cross sections that were measured at the Tevatron and the LHC. Using values for the nonperturbative long-distance matrix elements that we extract from the cross-section fits, we make predictions for the polarizations of the quarkonium states. We obtain good agreement with measurements of the polarizations, with the exception of the CDF Run II measurement of the prompt $J/\psi$ polarization, for which the agreement is only fair. In the predictions for the prompt-$J/\psi$ cross sections and polarizations, we take into account feeddown from the $\chi_{cJ}$ and $\psi(2S)$ states. | Fragmentation contributions to hadroproduction of prompt $J/\psi$, $\chi_{cJ}$, and $\psi(2S)$ states |
The fundamental link between entanglement dynamics and non-equilibrium statistics in isolated quantum systems has been established in theory and confirmed via experiment. However, the understanding of several consequential phenomena, such as Many-Body Localization (MBL), has been obstructed by the lack of a systematic approach to obtain many-body entanglement dynamics. This paper introduces the Quantum Correlation Transfer Function (QCTF) approach to entanglement dynamics in many-body quantum systems and employs this new framework to demonstrate the mechanism of MBL in disordered spin chains. We show that in the QCTF framework, the entanglement dynamics of two-level constituent particles of a many-body quantum system can be fully characterized directly from the system's Hamiltonian, which circumvents the bottleneck of calculating the many-body system's time-evolution. By employing the QCTF-based approach to entanglement dynamics, we demonstrate MBL dynamics in disordered Heisenberg spin chains through the suppressed quasi-periodic evolution of the spins' entanglement after a quench from an anti-ferromagnetic state. Furthermore, we prove the validity of a previous fundamental conjecture regarding the MBL phase by showing that in strongly-disordered spin chains with short-range interactions, the quantum correlation between particles is exponentially attenuated with respect to the site-to-site distance. Moreover, we obtain the lowest possible amplitude of the quasi-periodic spin entanglement as a function of disorder in the chain. The QCTF analysis is verified by exact numerical simulation of the system's evolution. We also show that QCTF provides a new foundation to study the Eigenstate Thermalization Hypothesis (ETH). The QCTF methodology can be extended in various ways to address general issues regarding non-equilibrium quantum thermodynamics in spin lattices with different geometries. | Directly Revealing Entanglement Dynamics through Quantum Correlation Transfer Functions with Resultant Demonstration of the Mechanism of Many-Body Localization |
To ensure that light emitted far away from the source of gravity can arrive at the null infinity of an asymptotically flat spacetime, it is shown that the rate of change of the Bondi mass aspect has to satisfy certain conditions. In Einstein gravity, we find that the sufficient condition implies a bound on the Bondi mass $m$, i.e., $|\dot{m}|\leqslant 0.3820~c^3/G$. This provides a new perspective on Dyson's maximum luminosity. However, in Brans-Dicke theory, the sufficient condition depends on the behavior of the radiation field of the scalar. Specifically, photons can escape to null infinity when the scalar gravitational radiation is not too large and the mass loss is not too fast. | A Bound on the Rate of Bondi Mass Loss |
Feature extraction is an efficient approach for alleviating the issue of dimensionality in high-dimensional data. As a popular self-supervised learning method, contrastive learning has recently garnered considerable attention. In this study, we propose a unified framework based on a new perspective of contrastive learning (CL) that is suitable for both unsupervised and supervised feature extraction. The proposed framework first constructs two CL graphs for uniquely defining the positive and negative pairs. Subsequently, the projection matrix is determined by minimizing the contrastive loss function. In addition, the proposed framework considers both similar and dissimilar samples to unify unsupervised and supervised feature extraction. Moreover, we propose three specific methods: an unsupervised contrastive learning method, supervised contrastive learning method 1, and supervised contrastive learning method 2. Finally, numerical experiments on five real datasets demonstrate the superior performance of the proposed framework in comparison to existing methods. | Unified Framework for Feature Extraction based on Contrastive Learning |
Recent advances in estimating black hole masses for AGN show that radio luminosity is dependent on black hole mass and accretion rate. In this paper we outline a possible scheme for unifying radio-quiet and radio-loud AGN. We take the ``optimistic'' view that the mass and spin of the central black hole, the accretion rate onto it, plus orientation and a weak environmental dependence, fully determine the observed properties of AGN. | The production mechanism of radio jets in AGN and quasar grand unification |
Let X be a normal projective variety admitting an action of a semisimple group with a unique closed orbit. We construct finitely many rational curves in X, all having a common point, such that every effective one-cycle on X is rationally equivalent to a unique linear combination of these curves with non-negative rational coefficients. When X is nonsingular, these curves are projective lines, and they generate the integral Chow group of one-cycles. | The cone of effective one-cycles of certain G-varieties |
Concentration inequalities quantify the deviation of a random variable from a fixed value. In spite of numerous applications, such as opinion surveys or ecological counting procedures, few concentration results are known for the setting of sampling without replacement from a finite population. Until now, the best general concentration inequality has been a Hoeffding inequality due to Serfling [Ann. Statist. 2 (1974) 39-48]. In this paper, we first improve on the fundamental result of Serfling [Ann. Statist. 2 (1974) 39-48], and further extend it to obtain a Bernstein concentration bound for sampling without replacement. We then derive an empirical version of our bound that does not require the variance to be known to the user. | Concentration inequalities for sampling without replacement |
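Serfling's Hoeffding-type bound for the mean of $n$ draws without replacement from $N$ values in $[a,b]$ is $P(\bar{X}_n - \mu \ge \varepsilon) \le \exp\!\big(-2n\varepsilon^2 / ((1 - f^*)(b-a)^2)\big)$ with $f^* = (n-1)/N$. A simulation sketch checking the bound empirically (population and parameters are illustrative, not from the paper):

```python
import math
import random

def serfling_bound(n, N, eps, a=0.0, b=1.0):
    """Serfling's Hoeffding-type tail bound for sampling without replacement."""
    fstar = (n - 1) / N
    return math.exp(-2 * n * eps ** 2 / ((1 - fstar) * (b - a) ** 2))

random.seed(0)
population = [i / 99 for i in range(100)]   # values in [0, 1], N = 100
mu = sum(population) / len(population)
n, eps, trials = 40, 0.1, 20000

# Empirical tail probability of the sample mean exceeding mu + eps.
exceed = sum(
    (sum(random.sample(population, n)) / n - mu) >= eps
    for _ in range(trials)
)
empirical = exceed / trials
assert empirical <= serfling_bound(n, N=100, eps=eps)
```

Note how the $(1 - f^*)$ factor tightens the bound relative to the with-replacement Hoeffding inequality as the sampling fraction grows.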
Non-parallel many-to-many voice conversion remains an interesting but challenging speech processing task. Recently, AutoVC, a conditional autoencoder based method, achieved excellent conversion results by disentangling the speaker identity and the speech content using information-constraining bottlenecks. However, due to the pure autoencoder training method, it is difficult to evaluate how well content and speaker identity are separated. In this paper, a novel voice conversion framework, named $\boldsymbol T$ext $\boldsymbol G$uided $\boldsymbol A$utoVC (TGAVC), is proposed to more effectively separate content and timbre from speech, where an expected content embedding produced from the text transcriptions is designed to guide the extraction of voice content. In addition, adversarial training is applied to eliminate the speaker identity information in the estimated content embedding extracted from speech. Under the guidance of the expected content embedding and the adversarial training, the content encoder is trained to extract a speaker-independent content embedding from speech. Experiments on the AIShell-3 dataset show that the proposed model outperforms AutoVC in terms of naturalness and similarity of the converted speech. | TGAVC: Improving Autoencoder Voice Conversion with Text-Guided and Adversarial Training |
We study the resilience of complex networks against attacks in which nodes are targeted intelligently, but where disabling a node has a cost to the attacker which depends on its degree. Attackers have to meet these costs with limited resources, which constrains their actions. A network's integrity is quantified in terms of the efficacy of the process that it supports. We calculate how the optimal attack strategy and the most attack-resistant network degree statistics depend on the node removal cost function and the attack resources. The resilience of networks against intelligent attacks is found to depend strongly on the node removal cost function faced by the attacker. In particular, if node removal costs increase sufficiently fast with the node degree, power law networks are found to be more resilient than Poissonian ones, even against optimized intelligent attacks. | Network resilience against intelligent attacks constrained by degree dependent node removal cost |
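As an illustration of the setting (not the paper's actual optimization), a greedy degree-ordered attack under a degree-dependent removal-cost budget can be sketched as follows; the graph, budget, and cost exponent are all hypothetical:

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component once `removed` nodes are deleted."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def greedy_attack(adj, budget, cost_exponent):
    """Remove highest-degree nodes first, paying deg**cost_exponent per node,
    until the budget is exhausted (a simple stand-in for an optimal strategy)."""
    removed, spent = set(), 0.0
    for u in sorted(adj, key=lambda u: len(adj[u]), reverse=True):
        cost = len(adj[u]) ** cost_exponent
        if spent + cost > budget:
            break
        removed.add(u)
        spent += cost
    return removed

# Toy star graph: hub 0 connected to leaves 1..20.
adj = {i: set() for i in range(21)}
for i in range(1, 21):
    adj[0].add(i)
    adj[i].add(0)

removed = greedy_attack(adj, budget=25.0, cost_exponent=1.0)
assert 0 in removed                        # hub (degree 20, cost 20) fits the budget
assert largest_component(adj, removed) == 1
```

Raising `cost_exponent` makes hubs disproportionately expensive, which is the regime in which the abstract finds power-law networks more resilient than Poissonian ones.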
The primate visual system achieves remarkable visual object recognition performance even in brief presentations and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been the lack of a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of "kernel analysis" that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds. | Deep Neural Networks Rival the Representation of Primate IT Cortex for Core Visual Object Recognition |
The field of surgical computer vision has undergone considerable breakthroughs in recent years with the rising popularity of deep neural network-based methods. However, standard fully-supervised approaches for training such models require vast amounts of annotated data, imposing a prohibitively high cost; especially in the clinical domain. Self-Supervised Learning (SSL) methods, which have begun to gain traction in the general computer vision community, represent a potential solution to these annotation costs, allowing to learn useful representations from only unlabeled data. Still, the effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored. In this work, we address this critical need by investigating four state-of-the-art SSL methods (MoCo v2, SimCLR, DINO, SwAV) in the context of surgical computer vision. We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection. We examine their parameterization, then their behavior with respect to training data quantities in semi-supervised settings. Correct transfer of these methods to surgery, as described and conducted in this work, leads to substantial performance gains over generic uses of SSL - up to 7.4% on phase recognition and 20% on tool presence detection - as well as state-of-the-art semi-supervised phase recognition approaches by up to 14%. Further results obtained on a highly diverse selection of surgical datasets exhibit strong generalization properties. The code is available at https://github.com/CAMMA-public/SelfSupSurg. | Dissecting Self-Supervised Learning Methods for Surgical Computer Vision |
Large scale numerical simulations of the Gross-Pitaevskii equation are used to elucidate the self-evolution of a Bose gas from a strongly non-equilibrium initial state. The stages of the process confirm and refine the theoretical scenario of Bose-Einstein condensation developed by Svistunov, Kagan, and Shlyapnikov [1-3]: the system evolves from the regime of weak turbulence to superfluid turbulence, via states of strong turbulence in the long-wavelength region of energy space. | Scenario of strongly non-equilibrium Bose-Einstein condensation |
The cosmic baryonic fluid at low redshifts is similar to a fully developed turbulence. In this work, we use simulation samples produced by a hybrid cosmological hydrodynamical/N-body code to investigate on what scale the deviation of spatial distributions between baryons and dark matter is caused by turbulence. For this purpose, we do not include physical processes such as star formation, supernova (SN) and active galactic nucleus (AGN) feedback in our code, so that the effect of turbulence heating on the IGM can be exhibited to the greatest extent. By computing the cross-correlation functions $r_m(k)$ for the density field and $r_v(k)$ for the velocity field of both baryons and dark matter, we find that deviations between the two matter components for both the density and velocity fields are, as expected, scale-dependent. That is, the deviations are most significant at small scales and gradually diminish on larger scales. The deviations are also time-dependent, i.e. they grow with increasing cosmic time. Most notably, the spatial deviations between baryons and dark matter revealed by the velocity field are more significant than those revealed by the density field. At z = 0, at the 1% level of deviation, the deviation scale is about 3.7 $h^{-1}$Mpc for the density field, but as large as 23 $h^{-1}$Mpc for the velocity field, a scale that falls within the weakly non-linear regime of the structure formation paradigm. Our results indicate that the effect of turbulence heating is indeed comparable to that of processes such as SN and AGN feedback. | Turbulence-induced deviation between baryonic field and dark matter field in the spatial distribution of the Universe |
As mobile robots become useful performing everyday tasks in complex real-world environments, they must be able to traverse a range of difficult terrain types such as stairs, stepping stones, gaps, jumps and narrow passages. This work investigated traversing these types of environments with a bipedal robot (simulation experiments), and a tracked robot (real world). Developing a traditional monolithic controller for traversing all terrain types is challenging, and for large physical robots realistic test facilities are required and safety must be ensured. An alternative is a suite of simple behaviour controllers that can be composed to achieve complex tasks. This work efficiently trained complex behaviours to enable mobile robots to traverse difficult terrain. By minimising retraining as new behaviours became available, robots were able to traverse increasingly complex terrain sets, leading toward the development of scalable behaviour libraries. | Learning Visuo-Motor Behaviours for Robot Locomotion Over Difficult Terrain |
This paper studies asynchronous Federated Learning (FL) subject to clients' individual arbitrary communication patterns with the parameter server. We propose FedMobile, a new asynchronous FL algorithm that exploits the mobility attribute of the mobile FL system to improve the learning performance. The key idea is to leverage the random client-to-client communication in a mobile network to create additional indirect communication opportunities with the server via upload and download relaying. We prove that FedMobile achieves a convergence rate $O(\frac{1}{\sqrt{NT}})$, where $N$ is the number of clients and $T$ is the number of communication slots, and show that the optimal design involves an interesting trade-off on the best timing of relaying. Our analysis suggests that with an increased level of mobility, asynchronous FL converges faster using FedMobile. Experiment results on a synthetic dataset and two real-world datasets verify our theoretical findings. | Mobility Improves the Convergence of Asynchronous Federated Learning |
Weak gravitational lensing by the large scale structure can be used to probe the dark matter distribution in the Universe directly and thus to probe cosmological models. The recent detection of cosmic shear by several groups has demonstrated the feasibility of this new mode of observational cosmology. In the currently most extensive analysis of cosmic shear, it was found that the shear field contains unexpected modes, so-called B-modes, which are thought to be unaccountable for by lensing. B-modes can in principle be generated by an intrinsic alignment of galaxies from which the shear is measured, or may signify some remaining systematics in the data reduction and analysis. In this paper we show that B-modes in fact {\it are produced} by lensing itself. The effect comes about through the clustering of source galaxies, which in particular implies an angular separation-dependent clustering in redshift. After presenting the theory of the decomposition of a general shear field into E- and B-modes, we calculate their respective power spectra and correlation functions for a clustered source distribution. Numerical and analytical estimates of the relative strength of these two modes show that the resulting B-mode is very small on angular scales larger than a few arcminutes, but its relative contribution rises quickly towards smaller angular scales, with comparable power in both modes at a few arcseconds. The relevance of this effect with regard to the current cosmic shear surveys is discussed. | B-modes in cosmic shear from source redshift clustering |
We present constraints on the variability and binarity of young stars in the central 10 arcseconds (~0.4 pc) of the Milky Way Galactic Center (GC) using Keck Adaptive Optics data over a 12 year baseline. Given our experiment's photometric uncertainties, at least 36% of our sample's known early-type stars are variable. We identified eclipsing binary systems by searching for periodic variability. In our sample of spectroscopically confirmed and likely early-type stars, we detected the two previously discovered GC eclipsing binary systems. We derived the likely binary fraction of main sequence, early-type stars at the GC via Monte Carlo simulations of eclipsing binary systems, and find that it is at least 32% with 90% confidence. | Constraining the Variability and Binary Fraction of Galactic Center Young Stars |
Web Real-Time Communication (WebRTC) is a new standard and industry effort that extends the web browsing model. For the first time, browsers are able to directly exchange real-time media with other browsers in a peer-to-peer fashion. Before WebRTC was introduced, it was cumbersome to build smooth chat and video applications: users often experienced unstable connections, blurry videos, and unclear sounds. WebRTC's peer-to-peer communication paradigm establishes the real-time connection between browsers using the SIP (Session Initiation Protocol) Trapezoid. A wide set of protocols is bundled in the WebRTC API, covering connection management, encoding/decoding negotiation, media selection and control, firewall and NAT traversal, etc. However, almost all current WebRTC applications use centralized signaling infrastructure, which brings problems of scalability, stability, and fault tolerance. In this paper, I present a decentralized architecture that introduces the Kademlia network into WebRTC to reduce the need for a centralized signaling service. | Decentralized WebRTC P2P network using Kademlia
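Kademlia's routing rests on an XOR distance metric: a record (e.g. a peer's signaling information) is stored on, and looked up from, the nodes whose IDs are XOR-closest to the record's key. A minimal sketch of that lookup rule, using hypothetical 4-bit node IDs for readability:

```python
import heapq

def closest_nodes(node_ids, target, k=3):
    """Return the k node IDs that are XOR-closest to the target key.
    In Kademlia these are the nodes that would store/route the record."""
    return heapq.nsmallest(k, node_ids, key=lambda n: n ^ target)

ids = [0b0001, 0b0101, 0b1100, 0b1110, 0b0111]
print([bin(n) for n in closest_nodes(ids, 0b0110)])
# ['0b111', '0b101', '0b1']  (XOR distances 1, 3, 7)
```

Real Kademlia uses 160-bit IDs and iterative lookups over k-buckets, but the node-selection rule is exactly this XOR ordering.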
We study the glassy transition for simple liquids in the hypernetted chain (HNC) approximation by means of an effective potential recently introduced. Integrating the HNC equations for hard spheres, we find a transition scenario analogous to that of long range disordered systems with ``one step replica symmetry breaking''. Our results agree qualitatively with Monte Carlo simulations of three dimensional hard spheres. | Glass transition and effective potential in the hypernetted chain approximation
Paramagnetic alignment of thermally rotating oblate dust grains is studied analytically for finite ratios of grain to gas temperatures. For such ratios, the alignment of angular momentum J with respect to the grain axis of maximal inertia is only partial. We treat the alignment of J using perturbative methods and disentangle the problem of J alignment in grain body axes from that of J alignment with respect to the magnetic field. This enables us to find the alignment of grain axes with the magnetic field and thus relate our theory to polarimetric observations. Our present results are applicable to the alignment of both paramagnetic and superparamagnetic grains. | Paramagnetic alignment of thermally rotating dust
The fast, dense winds which characterize Wolf-Rayet (WR) stars obscure their underlying cores, and complicate the verification of evolving core and nucleosynthesis models. Core evolution can be probed by measuring abundances of wind-borne nuclear processed elements, partially overcoming this limitation. Using ground-based mid-infrared spectroscopy and the 12.81um [NeII] emission line measured in four Galactic WR stars, we estimate neon abundances and compare to long-standing predictions from evolved-core models. For the WC star WR121, this abundance is found to be >~11x the cosmic value, in good agreement with predictions. For the three less-evolved WN stars, little neon enhancement above cosmic values is measured, as expected. We discuss the impact of clumping in WR winds on this measurement, and the promise of using metal abundance ratios to eliminate sensitivity to wind density and ionization structure. | The Neon Abundance of Galactic Wolf-Rayet Stars |
The friends-of-friends algorithm (hereafter, FOF) is a percolation algorithm which is routinely used to identify dark matter halos from N-body simulations. We use results from percolation theory to show that the boundary of FOF halos does not correspond to a single density threshold but to a range of densities close to a critical value that depends upon the linking length parameter, b. We show that for the commonly used choice of b = 0.2, this critical density is equal to 81.62 times the mean matter density. Consequently, halos identified by the FOF algorithm enclose an average overdensity which depends on their density profile (concentration) and therefore changes with halo mass, contrary to the popular belief that the average overdensity is ~180. We derive an analytical expression for the overdensity as a function of the linking length parameter b and the concentration of the halo. Results of tests carried out using simulated and actual FOF halos identified in cosmological simulations show excellent agreement with our analytical prediction. We also find that the mass of the halo that the FOF algorithm selects crucially depends upon mass resolution. We find a percolation-theory-motivated formula that is able to accurately correct for the dependence on the number of particles for the mock realizations of spherical and triaxial Navarro-Frenk-White halos. However, we show that this correction breaks down when applied to the real cosmological FOF halos due to the presence of substructure. Given that the abundance of substructure depends on redshift and cosmology, we expect that the resolution effects due to substructure on the FOF mass and halo mass function will also depend on redshift and cosmology and will be difficult to correct for in general. Finally, we discuss the implications of our results for the universality of the mass function. | The overdensity and masses of the friends-of-friends halos and universality of the halo mass function
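The linking-length rule at the heart of FOF reduces to percolation on pairwise distances: any two particles closer than b times the mean interparticle separation are "friends", and halos are the connected components. A toy sketch with union-find over all pairs (real halo finders use tree structures to avoid the O(n^2) scan):

```python
import itertools
import math

def fof_groups(points, linking_length):
    """Friends-of-friends grouping: join any pair of points closer than
    the linking length; groups are the connected components."""
    parent = list(range(len(points)))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in itertools.combinations(range(len(points)), 2):
        if math.dist(points[i], points[j]) < linking_length:
            parent[find(i)] = find(j)  # union the two components

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

# two tight clumps separated by far more than the linking length
pts = [(0, 0), (0.1, 0), (0.05, 0.1), (5, 5), (5.1, 5)]
print(sorted(len(g) for g in fof_groups(pts, 0.5)))  # [2, 3]
```

With b = 0.2, the linking length would be 0.2 times the mean interparticle separation of the simulation, which is what yields the critical boundary density of 81.62 times the mean quoted above.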
Non-uniform structures of nuclear matter are studied in a wide density range. Using density functional theory with a relativistic mean-field model, we examine non-uniform structures at sub-nuclear densities (nuclear ``pastas'') and at high densities, where a kaon condensate is expected. We aim to give a unified view of how the matter structure changes as density increases, carefully taking into account the Coulomb screening effects from the viewpoint of a first-order phase transition. | Kaon Condensation and the Non-Uniform Nuclear Matter
We compute the gluino lifetime and branching ratios in Split Supersymmetry. Using an effective-theory approach, we resum the large logarithmic corrections controlled by the strong gauge coupling and the top Yukawa coupling. We find that the resummation of the radiative corrections has a sizeable numerical impact on the gluino decay width and branching ratios. Finally, we discuss the gluino decays into gravitino, relevant in models with direct mediation of supersymmetry breaking. | Gluino Decays in Split Supersymmetry |
Precise estimates for the Bergman distances of Dini-smooth bounded planar domains are given. These estimates imply that on such domains the Bergman distance almost coincides with the Carath\'eodory and Kobayashi distances. | Estimates of the Bergman distance on Dini-smooth bounded planar domains |
We define Getzler's Gauss-Manin connection in cyclic homology at the level of chains and outline some relations of this construction to noncommutative calculus. | On the Gauss-Manin connection in cyclic homology |
Machine-learning techniques have become fundamental in high-energy physics and, for new physics searches, it is crucial to know their performance in terms of experimental sensitivity, understood as the statistical significance of the signal-plus-background hypothesis over the background-only one. We present here a simple method that combines the power of current machine-learning techniques to handle high-dimensional data with the likelihood-based inference tests used in traditional analyses, which allows us to estimate the sensitivity for both discovery and exclusion limits through a single parameter of interest, the signal strength. Based on supervised learning techniques, it can perform well also with high-dimensional data, when traditional techniques cannot. We apply the method first to a toy model, so we can explore its potential, and then to an LHC study of new physics particles in dijet final states. Taking as the optimal statistical significance the one we would obtain if the true generative functions were known, we show that our method provides a better approximation than the usual naive counting estimates. | A method for approximating optimal statistical significances with machine-learned likelihoods
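For a single counting experiment, the likelihood-based discovery significance has a well-known closed form (the standard Asimov approximation for a Poisson count with expected signal s over background b). The sketch below shows that baseline and the naive counting estimate it improves on; it is illustrative context, not the paper's machine-learned estimator:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance from a Poisson likelihood-ratio test
    (the standard Asimov formula): Z = sqrt(2((s+b)ln(1+s/b) - s))."""
    return math.sqrt(2 * ((s + b) * math.log(1 + s / b) - s))

def naive_significance(s, b):
    """The usual naive counting estimate s/sqrt(b)."""
    return s / math.sqrt(b)

print(round(naive_significance(10, 100), 3))   # 1.0
print(round(asimov_significance(10, 100), 3))  # 0.984
```

The two agree in the s << b limit but diverge for larger signal strengths, where the likelihood-based form is the more reliable of the two.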
We construct and analyze thermal spinning giant gravitons in type II/M-theory based on spherically wrapped black branes, using the method of thermal probe branes originating from the blackfold approach. These solutions generalize in different directions recent work in which the case of thermal (non-spinning) D3-brane giant gravitons was considered, and reveal a rich phase structure with various new properties. First of all, we extend the construction to M-theory, by constructing thermal giant graviton solutions using spherically wrapped M2- and M5-branes. More importantly, we switch on new quantum numbers, namely internal spins on the sphere, which are not present in the usual extremal limit for which the brane world volume stress tensor is Lorentz invariant. We examine the effect of this new type of excitation and in particular analyze the physical quantities in various regimes, including that of small temperatures as well as low/high spin. As a byproduct we find new stationary dipole-charged black hole solutions in AdS_m X S^n backgrounds of type II/M-theory. We finally show, via a double scaling extremal limit, that our spinning thermal giant graviton solutions lead to a novel null-wave zero-temperature giant graviton solution with a BPS spectrum, which does not have an analogue in terms of the conventional weakly coupled world volume theory. | Null-Wave Giant Gravitons from Thermal Spinning Brane Probes |
An $(n,d,\lambda)$-graph is a $d$ regular graph on $n$ vertices in which the absolute value of any nontrivial eigenvalue is at most $\lambda$. For any constant $d \geq 3$, $\epsilon>0$ and all sufficiently large $n$ we show that there is a deterministic poly(n) time algorithm that outputs an $(n,d, \lambda)$-graph (on exactly $n$ vertices) with $\lambda \leq 2 \sqrt{d-1}+\epsilon$. For any $d=p+2$ with $p \equiv 1 \bmod 4$ prime and all sufficiently large $n$, we describe a strongly explicit construction of an $(n,d, \lambda)$-graph (on exactly $n$ vertices) with $\lambda \leq \sqrt {2(d-1)} + \sqrt{d-2} +o(1) (< (1+\sqrt 2) \sqrt {d-1}+o(1))$, with the $o(1)$ term tending to $0$ as $n$ tends to infinity. For every $\epsilon >0$, $d>d_0(\epsilon)$ and $n>n_0(d,\epsilon)$ we present a strongly explicit construction of an $(m,d,\lambda)$-graph with $\lambda < (2+\epsilon) \sqrt d$ and $m=n+o(n)$. All constructions are obtained by starting with known ones of Ramanujan or nearly Ramanujan graphs, modifying or packing them in an appropriate way. The spectral analysis relies on the delocalization of eigenvectors of regular graphs in cycle-free neighborhoods. | Explicit expanders of every degree and size |
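The Ramanujan bound $\lambda \leq 2\sqrt{d-1}$ that these constructions approach can be checked numerically on a small example: the Petersen graph ($n = 10$, $d = 3$) attains $\lambda = 2 < 2\sqrt{2}$, so it is Ramanujan. A quick verification:

```python
import numpy as np

# Petersen graph: outer 5-cycle, inner pentagram, spokes between them.
n, d = 10, 3
A = np.zeros((n, n), dtype=int)
edges = [(i, (i + 1) % 5) for i in range(5)]            # outer cycle
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]   # inner pentagram
edges += [(i, 5 + i) for i in range(5)]                 # spokes
for u, v in edges:
    A[u, v] = A[v, u] = 1

eig = np.sort(np.linalg.eigvalsh(A))[::-1]   # eigenvalues, descending
lam = max(abs(eig[1]), abs(eig[-1]))         # largest nontrivial |eigenvalue|
print(round(float(eig[0]), 6), round(float(lam), 6))  # 3.0 2.0
print(lam <= 2 * np.sqrt(d - 1))                      # True: Ramanujan
```

The trivial eigenvalue equals $d$ for any $d$-regular graph; only the remaining spectrum enters the definition of an $(n,d,\lambda)$-graph.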
Matching has become the mainstream in counterfactual inference, with which selection bias between sample groups can be significantly eliminated. In practice, however, when estimating the average treatment effect on the treated (ATT) via matching, a trade-off between estimation accuracy and information loss persists no matter which method is used. Attempting to completely replace the matching process, this paper proposes the GAN-ATT estimator, which integrates a generative adversarial network (GAN) into the counterfactual inference framework. Through GAN machine learning, the probability density functions (PDFs) of samples in both the treatment group and the control group can be approximated. By differentiating the conditional PDFs of the two groups with identical input conditions, the conditional average treatment effect (CATE) can be estimated, and the ensemble average of the corresponding CATEs over all treatment group samples is the estimate of ATT. Utilizing GAN-based infinite sample augmentations, problems arising from insufficient samples or a lack of common support domains can be easily solved. Theoretically, when the GAN could perfectly learn the PDFs, our estimator provides an exact estimate of ATT. To check the performance of the GAN-ATT estimator, three sets of data are used for ATT estimation. Two toy data sets with 1/2-dimensional covariate inputs and constant/covariate-dependent treatment effects are tested: the GAN-ATT estimates prove to be close to the ground truth and better than those of traditional matching approaches. A real firm-level data set with high-dimensional inputs is also tested, and the applicability to real data sets is evaluated by comparison with matching approaches. Based on the evidence obtained from these three tests, we believe that the GAN-ATT estimator has significant advantages over traditional matching methods in estimating ATT. | A Constructive GAN-based Approach to Exact Estimate Treatment Effect without Matching
Connecting ideas from the geometric formulation of quantum mechanics with new results in symplectic geometry, we propose a new approach to the geometric quantization procedure. As a first result we verify that the correspondence between the "classical" Poisson bracket and its "quantum" counterpart holds. | Hamiltonian dynamics on the moduli space of half weighted Bohr - Sommerfeld Lagrangian subcycles of a fixed volume
We consider the Bilevel Knapsack with Interdiction Constraints, an extension of the classic 0-1 knapsack problem formulated as a Stackelberg game with two agents, a leader and a follower, that choose items from a common set and hold their own private knapsacks. First, the leader selects some items to be interdicted for the follower while satisfying a capacity constraint. Then the follower packs a set of the remaining items according to his knapsack constraint in order to maximize the profits. The goal of the leader is to minimize the follower's profits. The presence of two decision levels makes this problem very difficult to solve in practice: the current state-of-the-art algorithms can solve to optimality instances with 50-55 items at most. We derive effective lower bounds and present a new exact approach that exploits the structure of the induced follower's problem. The approach successfully solves all benchmark instances within one second in the worst case and larger instances with up to 500 items within 60 seconds. | A new exact approach for the Bilevel Knapsack with Interdiction Constraints |
Let $f$ be a zero-mean continuous stationary Gaussian process on ${\mathbb R}$ whose spectral measure vanishes in a $\delta$-neighborhood of the origin. Then the probability that $f$ stays non-negative on an interval of length $L$ is at most $e^{-c\delta^2 L^2}$ with some absolute $c>0$ and the result is sharp without additional assumptions. | On the probability that a stationary Gaussian process with spectral gap remains non-negative on a long interval |
Implicit semantic role labeling (iSRL) is the task of predicting the semantic roles of a predicate that do not appear as explicit arguments, but rather regard common sense knowledge or are mentioned earlier in the discourse. We introduce an approach to iSRL based on a predictive recurrent neural semantic frame model (PRNSFM) that uses a large unannotated corpus to learn the probability of a sequence of semantic arguments given a predicate. We leverage the sequence probabilities predicted by the PRNSFM to estimate selectional preferences for predicates and their arguments. On the NomBank iSRL test set, our approach improves state-of-the-art performance on implicit semantic role labeling with less reliance than prior work on manually constructed language resources. | Improving Implicit Semantic Role Labeling by Predicting Semantic Frame Arguments |
A stochastic configuration interaction method based on an evolutionary algorithm is designed as an affordable approximation to full configuration interaction (FCI). The algorithm comprises initiation, propagation and termination steps, where the propagation step is performed with cloning, mutation and cross-over, taking inspiration from genetic algorithms. We have tested its accuracy on the 1D Hubbard problem and on a molecular system (symmetric bond breaking of the water molecule). We have tested two different fitness functions, based on the energy of the determinants and on the CI coefficients of the determinants. We find that the absolute value of the CI coefficients is a more suitable fitness function when combined with a fixed selection scheme. | Evolutionary algorithm based configuration interaction approach
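The propagation step described (cloning, mutation, cross-over) is the standard genetic-algorithm loop. A generic sketch on bitstrings, maximizing a toy fitness; this is illustrative only, not the authors' CI code, where the bitstrings would encode determinant selections and the fitness would be energy- or coefficient-based:

```python
import random

def evolve(fitness, n_bits=20, pop_size=40, generations=60, seed=1):
    """Toy genetic algorithm: cloning (elitism), one-point crossover,
    and single-bit mutation, iterated for a fixed number of generations."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]              # cloning: keep the fittest
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)        # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_bits)] ^= 1     # single-bit mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve(sum)          # maximize the number of set bits
print(sum(best))            # close to the maximum of 20
```

Because the elite is cloned unmutated, the best fitness is monotone non-decreasing across generations; a termination criterion (as in the abstract above) would typically stop the loop once it stagnates.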
In recent years the number of CubeSats (U-class spacecraft) launched into space has increased exponentially, marking the dawn of nanosatellite technology. In general these satellites have a much smaller mass budget than conventional scientific satellites, which limits the shielding of scientific instruments against direct and indirect radiation in space. In this paper we present a simulation framework to quantify the signal in large field-of-view gamma-ray scintillation detectors of satellites induced by X-ray/gamma-ray transients, taking into account the response of the detector. Furthermore, we quantify the signal induced by X-ray and particle background sources at a Low-Earth Orbit outside the South Atlantic Anomaly and polar regions. Finally, we calculate the signal-to-noise ratio taking into account different energy threshold levels. Our simulation can be used to optimize material composition and predict the detectability of various astrophysical sources by CubeSats. We apply the developed simulation to a satellite belonging to the planned CAMELOT CubeSat constellation. This project mainly aims to detect short and long gamma-ray bursts (GRBs) and, as a secondary science objective, to detect soft gamma-ray repeaters (SGRs) and terrestrial gamma-ray flashes (TGFs). The simulation includes a detailed computer-aided design (CAD) model of the satellite to take into account the interaction of particles with the material of the satellite as accurately as possible. Results of our simulations predict that CubeSats can complement the large space observatories in high-energy astrophysics for observations of GRBs, SGRs and TGFs. For the detectors planned to be on board the CAMELOT CubeSats, the simulations show that detections with a signal-to-noise ratio of at least 9 for median GRB and SGR fluxes are achievable. | Simulations of expected signal and background of gamma-ray sources by large field-of-view detectors aboard CubeSats
In this paper, we propose a novel end-to-end unsupervised deep domain adaptation model for adaptive object detection by exploiting multi-label object recognition as a dual auxiliary task. The model exploits multi-label prediction to reveal the object category information in each image and then uses the prediction results to perform conditional adversarial global feature alignment, such that the multi-modal structure of image features can be tackled to bridge the domain divergence at the global feature level while preserving the discriminability of the features. Moreover, we introduce a prediction consistency regularization mechanism to assist object detection, which uses the multi-label prediction results as an auxiliary regularization information to ensure consistent object category discoveries between the object recognition task and the object detection task. Experiments are conducted on a few benchmark datasets and the results show the proposed model outperforms the state-of-the-art comparison methods. | Adaptive Object Detection with Dual Multi-Label Prediction |
While object detection methods traditionally make use of pixel-level masks or bounding boxes, alternative representations such as polygons or active contours have recently emerged. Among them, methods based on the regression of Fourier or Chebyshev coefficients have shown high potential on freeform objects. By defining object shapes as polar functions, they are however limited to star-shaped domains. We address this issue with SCR: a method that captures resolution-free object contours as complex periodic functions. The method offers a good compromise between accuracy and compactness thanks to the design of efficient geometric shape priors. We benchmark SCR on the popular COCO 2017 instance segmentation dataset, and show its competitiveness against existing algorithms in the field. In addition, we design a compact version of our network, which we benchmark on embedded hardware with a wide range of power targets, achieving up to real-time performance. | SCR: Smooth Contour Regression with Geometric Priors |
Terahertz near fields of gold metamaterials resonant at a frequency of $0.88\,\rm THz$ allow us to enter an extreme limit of non-perturbative ultrafast THz electronics: Fields reaching a ponderomotive energy in the keV range are exploited to drive nondestructive, quasi-static interband tunneling and impact ionization in undoped bulk GaAs, injecting electron-hole plasmas with densities in excess of $10^{19}\,\rm cm^{-3}$. This process causes bright luminescence at energies up to $0.5\,\rm eV$ above the band gap and induces a complete switch-off of the metamaterial resonance accompanied by self-amplitude modulation of transmitted few-cycle THz transients. Our results pave the way towards highly nonlinear THz optics and optoelectronic nanocircuitry with sub-picosecond switching times. | Extremely Nonperturbative Nonlinearities in GaAs Driven by Atomically Strong Terahertz Fields in Gold Metamaterials |
CuAl2O4 is a normal spinel oxide with quantum spin S = 1/2 for Cu2+. Unusually, the Cu2+ ions of CuAl2O4 sit at a tetrahedral position, unlike the octahedral position typical of many oxides. At low temperatures, it exhibits all the thermodynamic evidence of a quantum spin glass. For example, polycrystalline CuAl2O4 shows a cusp centered at ~2 K in the low-field dc magnetization data and a clear frequency dependence in the ac magnetic susceptibility, while it displays logarithmic relaxation behavior in the time dependence of the magnetization. At the same time, there is a peak at ~2.3 K in the heat capacity, which shifts towards higher temperature with magnetic fields. On the other hand, there is no evidence of new superlattice peaks in the high-resolution neutron powder diffraction data when cooled from 40 to 0.4 K. This implies that there is no long-range magnetic order down to 0.4 K, thus confirming a spin glass-like ground state for CuAl2O4. Interestingly, there is no sign of structural distortion either, although Cu2+ is a Jahn-Teller active ion. Thus, we claim that an orbital liquid state is the most likely ground state of CuAl2O4. Of further interest, it also exhibits a large frustration parameter, f = Theta_CW/Tm ~ 67, one of the largest values reported for spinel oxides. Our observations suggest that CuAl2O4 is a rare example of a frustrated quantum spin glass and a good candidate for an orbital liquid state. | Spin glass behavior in frustrated quantum spin system CuAl2O4 with a possible orbital liquid state
The problem of simulating the interaction of spacecraft travelling at velocities necessary for starflight with the interplanetary and interstellar medium is considered. Interaction of protons, atoms, and ions at kinetic energies relative to the spacecraft (MeV per nucleon) is essentially a problem of sputtering. More problematic is the impact of dust grains, macroscopic objects on the order of 10 nm ($10^{-21}$ kg) to 1 $\mu$m ($10^{-15}$ kg), the effects of which are difficult to calculate, and thus experiments are needed. The maximum velocity of dust grains that can be achieved at present in the laboratory using electrostatic methods is approximately 100 km/s, two orders of magnitude below starflight velocities. The attainment of greater velocities has been previously considered in connection with the concept of impact fusion and was concluded to be technologically very challenging. The reasons for this are explained in terms of field emission, which limits the charge-to-mass ratio on the macroscopic particle being accelerated as well as the voltage potential gradient of the accelerating electrostatic field, resulting in the accelerator needing to be hundreds to thousands of kilometers long for $\mu$m-sized grains. Use of circular accelerators (cyclotrons and synchrotrons) is not practical due to limitations on magnetic field strength making the accelerator thousands of kilometers in size for $\mu$m-sized grains. Electromagnetic launchers (railguns, coilguns) have not been able to produce velocities greater than conventional gas guns (< 10 km/s). The nearest feasible technologies (tandem accelerators, macromolecular accelerators) to reach the regime of projectile mass and velocity of interest are reviewed. Pulsed lasers are found to be the only facilities able to accelerate condensed phase matter to velocities approaching 1000 km/s but unlikely to be able to reach greater speeds. | Experimental Simulation of Dust Impacts at Starflight Velocities |
We determine the explicit thermodynamic functions, in particular the specific heat, of a spin system interacting with a spin bath that exerts finite dissipation on the system. We show that the specific heat is a sum of products of a thermal equilibration factor, which carries the temperature dependence, and a dynamical correction factor characteristic of the dissipative energy flow from the system under steady state. The variation of the specific heat with temperature is accompanied by an abrupt transition that depends on these dynamical factors, characteristic of the finite system size. | Fluctuation corrections on thermodynamic functions: Finite size effect
We consider infinite staircase translation surfaces with varying step sizes. For typical step sizes we show that the translation flow is uniquely ergodic in almost every direction. Our results also hold for typical configurations of the Ehrenfest wind-tree model endowed with the Hausdorff topology. In contrast, we show that the translation flow on a periodic translation surface can not be uniquely ergodic in any direction. | Unique Ergodicity For Infinite Area Translation Surfaces
In this letter, we consider a multiple-input multiple-output two-way cognitive radio system under a spectrum sharing scenario, where primary and secondary users operate on the same frequency band. The secondary terminals aim to exchange different messages with each other using multiple relays, each of which employs an amplify-and-forward strategy. The main objective of our work is to maximize the secondary sum rate allowed when sharing the spectrum with the primary users, while respecting the interference threshold tolerated by the primary user. In this context, we derive a closed-form expression for the optimal power allocated to each antenna of the terminals. We then discuss the impact of some system parameters on the performance in the numerical results section. | Optimal Transmit Power Allocation for MIMO Two-Way Cognitive Relay Networks with Multiple Relays
A completely Liouville integrable Hamiltonian system with two degrees of freedom, describing the dynamics of two vortex filaments in a Bose-Einstein condensate enclosed in a cylindrical trap, is considered. For the system of two vortices with identical intensities, we detect a bifurcation of three Liouville tori into one. Such a bifurcation was found in the integrable Goryachev-Chaplygin-Sretensky case of rigid body dynamics. | Phase Topology of Two Vortices of the Identical Intensities in Bose-Einstein Condensate
We describe advances on a method designed to derive accurate parameters of M dwarfs. Our analysis consists in comparing high-resolution infrared spectra acquired with the near-infrared spectro-polarimeter SPIRou to synthetic spectra computed from MARCS model atmospheres, in order to derive the effective temperature ($T_{\rm eff}$), surface gravity ($\rm \log{g}$), metallicity ([M/H]) and alpha-enhancement ($\rm [\alpha/Fe]$) of 44 M dwarfs monitored within the SPIRou Legacy Survey (SLS). Relying on 12 of these stars, we calibrated our method by refining our selection of well modelled stellar lines, and adjusted the line list parameters to improve the fit when necessary. Our retrieved $T_{\rm eff}$, $\rm \log{g}$ and [M/H] are in good agreement with literature values, with dispersions of the order of 50 K in $T_{\rm eff}$ and 0.1 dex in $\rm \log{g}$ and [M/H]. We report that fitting $\rm [\alpha/Fe]$ has an impact on the derivation of the other stellar parameters, motivating us to extend our fitting procedure to this additional parameter. We find that our retrieved $\rm [\alpha/Fe]$ are compatible with those expected from empirical relations derived in other studies. | Estimating the atmospheric properties of 44 M dwarfs from SPIRou spectra |
In this paper we consider the diphoton production in hadronic collisions at the next-to-next-to-leading order (NNLO) in perturbative QCD, taking into account for the first time the full top quark mass dependence up to two loops (full NNLO). We show selected numerical distributions, highlighting the kinematic regions where the massive corrections are more significant. We make use of the recently computed two-loop massive amplitudes for diphoton production in the quark annihilation channel. The remaining massive contributions at NNLO are also considered, and we comment on the weight of the different types of contributions to the full and complete result. | Full top-quark mass dependence in diphoton production at NNLO in QCD |
Minimizing prediction uncertainty on unlabeled data is a key factor to achieve good performance in semi-supervised learning (SSL). The prediction uncertainty is typically expressed as the \emph{entropy} computed by the transformed probabilities in output space. Most existing works distill low-entropy prediction by either accepting the determining class (with the largest probability) as the true label or suppressing subtle predictions (with the smaller probabilities). Unarguably, these distillation strategies are usually heuristic and less informative for model training. From this discernment, this paper proposes a dual mechanism, named ADaptive Sharpening (\ADS), which first applies a soft-threshold to adaptively mask out determinate and negligible predictions, and then seamlessly sharpens the informed predictions, distilling certain predictions with the informed ones only. More importantly, we theoretically analyze the traits of \ADS by comparing with various distillation strategies. Numerous experiments verify that \ADS significantly improves the state-of-the-art SSL methods by making it a plug-in. Our proposed \ADS forges a cornerstone for future distillation-based SSL research. | Taming Overconfident Prediction on Unlabeled Data from Hindsight |
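The dual mechanism described can be illustrated with a small numerical sketch; this is one plausible reading of the mask-then-sharpen idea, not the authors' exact ADS. A hypothetical threshold tau masks out negligible probabilities, and a temperature T < 1 sharpens the informed ones, which lowers the prediction entropy:

```python
import math

def entropy(p):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(x * math.log(x) for x in p if x > 0)

def adaptive_sharpen(p, tau=0.1, T=0.5):
    """Illustrative mask-then-sharpen: zero out probabilities below tau,
    raise the rest to the power 1/T, and renormalize."""
    masked = [x if x >= tau else 0.0 for x in p]
    sharp = [x ** (1 / T) for x in masked]
    z = sum(sharp)
    return [x / z for x in sharp]

p = [0.55, 0.25, 0.15, 0.05]
q = adaptive_sharpen(p)
print([round(x, 3) for x in q])  # [0.781, 0.161, 0.058, 0.0]
print(entropy(q) < entropy(p))   # True: lower prediction uncertainty
```

Unlike hard pseudo-labeling (which would collapse p to a one-hot vector), the sharpened distribution retains the relative ordering of the informed predictions, which is the point the abstract makes against purely heuristic distillation.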
We present the potential for discovering the Standard Model Higgs boson produced by the vector-boson fusion mechanism. We consider the decay of Higgs bosons into the W+W- final state, with both W bosons subsequently decaying leptonically. The main background is ttbar production with one or more jets. This study is based on a full simulation of the CMS detector and up-to-date reconstruction codes. The result is that a signal of 5 sigma significance can be obtained with an integrated luminosity of 12-72 1/fb for Higgs boson masses between 130-200 GeV. In addition, the major background can be measured directly to 7% from the data with an integrated luminosity of 30 1/fb. In this study, we also suggest a method to obtain information on the Higgs mass using the transverse mass distributions. | Search for a Standard Model Higgs Boson in CMS via Vector Boson Fusion in the H->WW->l\nu l\nu Channel
Microblog classification has received a lot of attention in recent years. Different classification tasks have been investigated, most of them focusing on classifying microblogs into a small number of classes (five or less) using a training set of manually annotated tweets. Unfortunately, labelling data is tedious and expensive, and finding tweets that cover all the classes of interest is not always straightforward, especially when some of the classes do not frequently arise in practice. In this paper we study an approach to tweet classification based on distant supervision, whereby we automatically transfer labels from one social medium to another for a single-label multi-class classification task. In particular, we apply YouTube video classes to tweets linking to these videos. This provides for free a virtually unlimited number of labelled instances that can be used as training data. The classification experiments we have run show that training a tweet classifier via these automatically labelled data achieves substantially better performance than training the same classifier with a limited amount of manually labelled data; this is advantageous, given that the automatically labelled data come at no cost. Further investigation of our approach shows its robustness when applied with different numbers of classes and across different languages. | Bridging Social Media via Distant Supervision |
In arXiv:1008.1018 it is shown that a given stable vector bundle $V$ on a Calabi-Yau threefold $X$ which satisfies $c_2(X)=c_2(V)$ can be deformed to a solution of the Strominger system and the equations of motion of heterotic string theory. In this note we extend this result to the polystable case and construct explicit examples of polystable bundles on elliptically fibered Calabi-Yau threefolds where it applies. The polystable bundle is given by a spectral cover bundle, for the visible sector, and a suitably chosen bundle, for the hidden sector. This provides a new class of heterotic flux compactifications via non-Kahler deformation of Calabi-Yau geometries with polystable bundles. As an application, we obtain examples of non-Kahler deformations of some three generation GUT models. | Heterotic Non-Kahler Geometries via Polystable Bundles on Calabi-Yau Threefolds |
LDA+DMFT (Local Density Approximation combined with Dynamical Mean-Field Theory) computation scheme has been used to calculate spectral properties of LaFeAsO -- the parent compound of the new high-T_c iron oxypnictides. The average Coulomb repulsion U=3-4 eV and Hund's exchange J=0.8 eV parameters for iron 3d electrons were calculated using the first principles constrained density functional theory scheme in the Wannier functions formalism. DMFT calculations using these parameters result in moderately correlated electronic structure with effective electron mass enhancement m^*~2 that is in agreement with the experimental X-ray and photoemission spectra. Conclusion of moderate correlations strength is confirmed by the observation that pnictides experimental spectra agree well with corresponding spectra for metallic iron while being very different with Mott insulator FeO spectra. | Strength of correlations in pnictides and its assessment by theoretical calculations and spectroscopy experiments |
Bitcoin's success has led to significant interest in its underlying components, particularly Blockchain technology. Over 10 years after Bitcoin's initial release, the community still suffers from a lack of clarity regarding what properties define Blockchain technology, its relationship to similar technologies, and which of its proposed use-cases are tenable and which are little more than hype. In this paper we answer four common questions regarding Blockchain technology: (1) what exactly is Blockchain technology, (2) what capabilities does it provide, (3) what are good applications for Blockchain technology, and (4) how does it relate to other distributed technologies (e.g., distributed databases). We accomplish this goal by using grounded theory (a structured approach to gathering and analyzing qualitative data) to thoroughly analyze a large corpus of literature on Blockchain technology. This method enables us to answer the above questions while limiting researcher bias, separating thought leadership from peddled hype, and identifying open research questions related to Blockchain technology. The audience for this paper is broad as it aims to help researchers in a variety of areas come to a better understanding of Blockchain technology and identify whether it may be of use in their own research. | SoK: Blockchain Technology and Its Potential Use Cases
Conditions for geometric ergodicity of multivariate autoregressive conditional heteroskedasticity (ARCH) processes, with the so-called BEKK (Baba, Engle, Kraft, and Kroner) parametrization, are considered. We show for a class of BEKK-ARCH processes that the invariant distribution is regularly varying. In order to account for the possibility of different tail indices of the marginals, we consider the notion of vector scaling regular variation, in the spirit of Perfekt (1997, Advances in Applied Probability, 29, pp. 138-164). The characterization of the tail behavior of the processes is used for deriving the asymptotic properties of the sample covariance matrices. | On the tail behavior of a class of multivariate conditionally heteroskedastic processes |
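A minimal simulation of the bivariate BEKK(1,0)-ARCH recursion, $H_t = CC' + A\,\varepsilon_{t-1}\varepsilon_{t-1}'A'$, illustrates the class of processes the abstract studies. The parameter matrices below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
C = np.array([[0.5, 0.0], [0.1, 0.4]])    # lower-triangular intercept (illustrative)
A = np.array([[0.3, 0.05], [0.05, 0.3]])  # ARCH loading matrix (illustrative)

def simulate_bekk(T):
    """Simulate a bivariate BEKK(1,0)-ARCH process (sketch).

    H_t = C C' + A eps_{t-1} eps_{t-1}' A', with eps_t = H_t^{1/2} z_t
    and z_t standard Gaussian; H_t stays positive definite because
    C C' is positive definite and the ARCH term is positive semidefinite.
    """
    eps = np.zeros((T, 2))
    for t in range(1, T):
        H = C @ C.T + A @ np.outer(eps[t - 1], eps[t - 1]) @ A.T
        L = np.linalg.cholesky(H)         # H_t^{1/2} via Cholesky
        eps[t] = L @ rng.standard_normal(2)
    return eps

eps = simulate_bekk(5000)
```

With these small loadings the process is geometrically ergodic; studying the marginal tail indices of such simulated paths is exactly the regular-variation question the paper addresses analytically.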
We describe the earliest measurements of the DT fusion cross section commissioned by the Manhattan Project, first at Purdue University in 1943 and then at Los Alamos 1945-6 and later, in 1951-2. The Los Alamos measurements led to the realization that a 3/2$^+$ resonance in the DT system enhances the fusion cross section by a factor of one hundred at energies relevant to applications. This was a transformational discovery, making the quest for terrestrial fusion energy possible. The earliest measurements were reasonably accurate given the technology of the time and the scarcity of tritium, and were quickly improved to provide cross section data accurate to just a few percent. We provide a previously-unappreciated insight: that DT fusion was first reported in Ruhlig's 1938 University of Michigan experiment and likely influenced Konopinski in 1942 to suggest its usefulness for thermonuclear technologies. We report on preliminary work to repeat the 1938 measurement, and our simulations of that experiment. We also present some work by Fermi, from his 1945 Los Alamos lectures, showing that he used the S-factor concept about a decade before it was introduced by nuclear astrophysicists. | The earliest DT nuclear fusion discoveries |
Peer-To-Peer (P2P) networks are self-organizing, distributed systems, with no centralized authority or infrastructure. Because of the voluntary participation, the availability of resources in a P2P system can be highly variable and unpredictable. In this paper, we use ideas from Game Theory to study the interaction of strategic and rational peers, and propose a differential service-based incentive scheme to improve the system's performance. | A Game Theoretic Framework for Incentives in P2P Systems |
The DAMA/LIBRA observation of an annual modulation in the detection rate compatible with that expected for dark matter particles from the galactic halo has accumulated evidence for more than twenty years. It is the only hint of a direct detection of the elusive dark matter, but it is in strong tension with the negative results of other very sensitive experiments, requiring ad-hoc scenarios to reconcile all the present experimental results. Testing the DAMA/LIBRA result using the same target material, NaI(Tl), removes the dependence on the particle and halo models and is the goal of the ANAIS-112 experiment, taking data at the Canfranc Underground Laboratory in Spain since August 2017 with 112.5 kg of NaI(Tl). At very low energies, the detection rate is dominated by non-bulk scintillation events and careful event selection is mandatory. This article summarizes the efforts devoted to better characterize and filter this contribution in ANAIS-112 data using a boosted decision tree (BDT), trained for this goal with high efficiency. We report on the selection of the training populations, the procedure to determine the optimal cut on the BDT parameter, the estimate of the efficiencies for the selection of bulk scintillation in the region of interest (ROI), and the evaluation of the performance of this analysis with respect to the previous filtering. The improvement achieved in background rejection in the ROI, but moreover, the increase in detection efficiency, push the ANAIS-112 sensitivity to test the DAMA/LIBRA annual modulation result around 3$\sigma$ with three-year exposure, being possible to reach 5$\sigma$ by extending the data taking for a few more years than the scheduled 5 years which were due in August 2022. | Improving ANAIS-112 sensitivity to DAMA/LIBRA signal with machine learning techniques |
Modern cancer -omics and pharmacological data hold great promise in precision cancer medicine for developing individualized patient treatments. However, high heterogeneity and noise in such data pose challenges for predicting the response of cancer cell lines to therapeutic drugs accurately. As a result, arbitrary human judgment calls are rampant throughout the predictive modeling pipeline. In this work, we develop a transparent stability-driven pipeline for drug response interpretable predictions, or staDRIP, which builds upon the PCS framework for veridical data science (Yu and Kumbier, 2020) and mitigates the impact of human judgment calls. Here we use the PCS framework for the first time in cancer research to extract proteins and genes that are important in predicting the drug responses and stable across appropriate data and model perturbations. Out of the 24 most stable proteins we identified using data from the Cancer Cell Line Encyclopedia (CCLE), 18 have been associated with the drug response or identified as a known or possible drug target in previous literature, demonstrating the utility of our stability-driven pipeline for knowledge discovery in cancer drug response prediction modeling. | A stability-driven protocol for drug response interpretable prediction (staDRIP) |
The stochastic mutual repressor model is analysed using perturbation methods. This simple model of a gene circuit consists of two genes and three promotor states. Either of the two protein products can dimerize, forming a repressor molecule that binds to the promotor of the other gene. When the repressor is bound to a promotor, the corresponding gene is not transcribed and no protein is produced. Either one of the promotors can be repressed at any given time or both can be unrepressed, leaving three possible promotor states. This model is analysed in its bistable regime in which the deterministic limit exhibits two stable fixed points and an unstable saddle, and the case of small noise is considered. On small time scales, the stochastic process fluctuates near one of the stable fixed points, and on large time scales, a metastable transition can occur, where fluctuations drive the system past the unstable saddle to the other stable fixed point. To explore how different intrinsic noise sources affect these transitions, fluctuations in protein production and degradation are eliminated, leaving fluctuations in the promotor state as the only source of noise in the system. Perturbation methods are then used to compute the stability landscape and the distribution of transition times, or first exit time density. To understand how protein noise affects the system, small magnitude fluctuations are added back into the process, and the stability landscape is compared to that of the process without protein noise. It is found that significant differences in the random process emerge in the presence of protein noise. | Isolating intrinsic noise sources in a stochastic genetic switch |
Galileon radiation in the collapse of a thin spherical shell of matter is analyzed. In the framework of a cubic Galileon theory, we compute the field profile produced at large distances by a short collapse, finding that the radiated field has two peaks traveling ahead of light fronts. The total energy radiated during the collapse follows a power law scaling with the shell's physical width and results from two competing effects: a Vainshtein suppression of the emission and an enhancement due to the thinness of the shell. | Galileon Radiation from a Spherical Collapsing Shell |
Solutions with scaling-invariant bounds such as self-similar solutions, play an important role in the understanding of the regularity and asymptotic structures of solutions to the Navier-Stokes equations. In this paper, we prove that any steady solution satisfying $|\mathbf{u}(x)|\leq C/|x|$ for any constant $C$ in $\mathbb{R}^n\setminus \{0\}$ with $ n \geq 4$, must be zero. Our main idea is to analyze the velocity field and the total head pressure via weighted energy estimates with suitable multipliers so that the proof is pretty elementary and short. These results not only give the Liouville-type theorem for steady solutions in higher dimensions with neither smallness nor self-similarity type assumptions, but also help to remove a class of singularities of solutions and give the optimal asymptotic behaviors of solutions at infinity in the exterior domains. | Rigidity of Steady Solutions to the Navier-Stokes Equations in High Dimensions and its applications
This paper presents a modeling study of the oxidation of cyclohexane from low to intermediate temperature (650-1050 K), including the negative temperature coefficient (NTC) zone. A detailed kinetic mechanism has been developed using computer-aided generation. This comprehensive low-temperature mechanism involves 513 species and 2446 reactions and includes two additions of cyclohexyl radicals to oxygen, as well as subsequent reactions. The rate constants of the reactions involving the formation of bicyclic species (isomerizations, formation of cyclic ethers) have been evaluated from literature data. This mechanism is able to satisfactorily reproduce experimental results obtained in a rapid-compression machine for temperatures ranging from 650 to 900 K and in a jet-stirred reactor from 750 to 1050 K. Flow-rate analyses have been performed at low and intermediate temperatures. | Modelling of the gas-phase oxidation of cyclohexane |
We consider the possibility that the horizon area is expressed by the general area spectrum in loop quantum gravity and calculate the black hole entropy by counting the degrees of freedom in spin-network states related to its area. Although the general area spectrum has a complex expression, we succeeded in obtaining the result that the black hole entropy is proportional to its area as in previous works where the simplified area formula has been used. This gives new values for the Barbero-Immirzi parameter ($\gamma = 0.5802...$ or $0.7847...$) which are larger than that of previous works. | Black hole entropy for the general area spectrum
Most stars will experience episodes of substantial mass loss at some point in their lives. For very massive stars, mass loss dominates their evolution, although the mass loss rates are not known exactly, particularly once the star has left the main sequence. Direct observations of the stellar winds of massive stars can give information on the current mass-loss rates, while studies of the ring nebulae and HI shells that surround many Wolf-Rayet (WR) and luminous blue variable (LBV) stars provide information on the previous mass-loss history. The evolution of the most massive stars, (M > 25 solar masses), essentially follows the sequence O star to LBV or red supergiant (RSG) to WR star to supernova. For stars of mass less than 25 solar masses there is no final WR stage. During the main sequence and WR stages, the mass loss takes the form of highly supersonic stellar winds, which blow bubbles in the interstellar and circumstellar medium. In this way, the mechanical luminosity of the stellar wind is converted into kinetic energy of the swept-up ambient material, which is important for the dynamics of the interstellar medium. In this review article, analytic and numerical models are used to describe the hydrodynamics and energetics of wind-blown bubbles. A brief review of observations of bubbles is given, and the degree to which theory is supported by observations is discussed. | Wind-Blown Bubbles around Evolved Stars |
A soft-wall warped extra dimension allows one to relax the tight constraints imposed by electroweak data in conventional Randall-Sundrum models. We investigate a setup, where the lepton flavour structure of the Standard Model is realised by split fermion locations. Bulk fermions with general locations are not analytically tractable in a soft-wall background, so we follow a numerical approach to perform the Kaluza-Klein reduction. Lepton flavour violation is induced by the exchange of Kaluza-Klein gauge bosons. We find that rates for processes such as muon-electron conversion are significantly reduced compared to hard-wall models, allowing for a Kaluza-Klein scale as low as 2 TeV. Accommodating small neutrino masses forces one to introduce a large hierarchy of scales into the model, making pressing the question of a suitable stabilisation mechanism. | Suppressing Lepton Flavour Violation in a Soft-Wall Extra Dimension |
We study optimal synchronization of networks of coupled phase oscillators. We extend previous theory for optimizing the synchronization properties of undirected networks to the important case of directed networks. We derive a generalized synchrony alignment function that encodes the interplay between network structure and the oscillators' natural frequencies and serves as an objective measure for the network's degree of synchronization. Using the generalized synchrony alignment function, we show that a network's synchronization properties can be systematically optimized. This framework also allows us to study the properties of synchrony-optimized networks, and in particular, investigate the role of directed network properties such as nodal in- and out-degrees. For instance, we find that in optimally rewired networks the heterogeneity of the in-degree distribution roughly matches the heterogeneity of the natural frequency distribution, but no such relationship emerges for out-degrees. We also observe that a network's synchronization properties are promoted by a strong correlation between the nodal in-degrees and the natural frequencies of oscillators, whereas the relationship between the nodal out-degrees and the natural frequencies has comparatively little effect. This result is supported by our theory, which indicates that synchronization is promoted by a strong alignment of the natural frequencies with the left singular vectors corresponding to the largest singular values of the Laplacian matrix. | Optimal synchronization of directed complex networks |
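A numerical sketch of the generalized synchrony alignment function via the singular value decomposition of a directed Laplacian is given below. The normalization and the handling of the zero mode follow the undirected definition and are assumptions here, as is the random test network:

```python
import numpy as np

def synchrony_alignment(A, omega):
    """Generalized synchrony alignment function J(omega, L) (sketch).

    Uses the SVD L = U S V^T of the directed (in-degree) Laplacian and
    measures the alignment of the natural frequencies omega with the
    left singular vectors, weighted by inverse squared singular values.
    """
    N = len(omega)
    L = np.diag(A.sum(axis=1)) - A          # rows of L sum to zero
    U, s, Vt = np.linalg.svd(L)
    J = 0.0
    for j in range(N):
        if s[j] > 1e-10:                    # skip the (near-)zero singular value
            J += (U[:, j] @ omega) ** 2 / s[j] ** 2
    return J / N

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.5).astype(float)  # random directed adjacency
np.fill_diagonal(A, 0.0)
omega = rng.standard_normal(6)
omega -= omega.mean()                          # mean-centered frequencies
J = synchrony_alignment(A, omega)
```

Optimizing synchronization then amounts to rewiring `A` or reassigning `omega` to change this objective; per the abstract, strong alignment of the frequencies with the dominant left singular vectors promotes synchrony.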
We prove that the complex cobordism class of any hyper-K\"{a}hler manifold of dimension $2n$ is a unique combination with rational coefficients of classes of products of punctual Hilbert schemes of $K3$ surfaces. We also prove a similar result using the generalized Kummer varieties instead of punctual Hilbert schemes. As a key step, we establish a closed formula for the top Chern character of their tangent bundles. | Hilbert schemes of K3 surfaces, generalized Kummer, and cobordism classes of hyper-K\"ahler manifolds |
We define the $osp(1,2)$ Gaudin algebra and consider integrable models described by it. The models include the $osp(1,2)$ Gaudin magnet and the Dicke model related to it. Detailed discussion of the simplest cases of these models is presented. The effect of the presence of fermions on the separation of variables is indicated. | On Integrable Models Related to the $osp(1,2)$ Gaudin Algebra |
We construct an explicit regulator map from the weight n Bloch higher Chow group complex to the weight n Deligne complex of a regular complex projective algebraic variety X. We define the Arakelov weight n motivic complex as the cone of this map shifted by one. Its last cohomology group is (a version of) the Arakelov Chow group defined by H. Gillet and C. Soulé. We relate the Grassmannian n-logarithms (defined as in [G5]) to the geometry of the symmetric space for GL_n(C). For n=2 we recover Lobachevsky's formula for the volume of an ideal geodesic tetrahedron via the dilogarithm. Using the relationship with symmetric spaces we construct the Borel regulator on K_{2n-1}(C) via the Grassmannian n-logarithms. We study the Chow dilogarithm and prove a reciprocity law which strengthens Suslin's reciprocity law for Milnor's K_3 on curves. | Polylogarithms, regulators and Arakelov motivic complexes
In this paper we obtain new limit theorems for variational functionals of high frequency observations of stationary increments L\'evy driven moving averages. We will see that the asymptotic behaviour of such functionals heavily depends on the kernel, the driving L\'evy process and the properties of the functional under consideration. We show the "law of large numbers" for our class of statistics, which consists of three different cases. For one of the appearing limits, which we refer to as the ergodic type limit, we also prove the associated weak limit theory, which again consists of three different cases. Our work is related to [9,10], who considered power variation functionals of stationary increments L\'evy driven moving averages. | On limit theory for functionals of stationary increments Levy driven moving averages |
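The "law of large numbers" for variational functionals can be illustrated in the ordinary Brownian case, which is a simplification of the Lévy moving-average setting of the paper: the realized quadratic variation of a discretized Brownian path converges to the time horizon.

```python
import math
import random

random.seed(42)

def realized_power_variation(path, p):
    """Sum of p-th absolute powers of the increments of a sampled path."""
    return sum(abs(b - a) ** p for a, b in zip(path, path[1:]))

# Discretized standard Brownian motion on [0, 1]; for p = 2 the
# realized power variation converges to the quadratic variation t = 1.
n = 200_000
path = [0.0]
for _ in range(n):
    path.append(path[-1] + random.gauss(0.0, math.sqrt(1.0 / n)))

qv = realized_power_variation(path, 2.0)
```

For the kernels and driving Lévy processes studied in the paper, both the normalization and the limit differ from this Brownian benchmark, which is precisely the content of the three cases of the limit theory.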
We show that a black-box construction of a pseudorandom generator from a one-way function needs to make Omega(n/log(n)) calls to the underlying one-way function. The bound even holds if the one-way function is guaranteed to be regular. In this case it matches the best known construction due to Goldreich, Krawczyk, and Luby (SIAM J. Comp. 22, 1993), which uses O(n/log(n)) calls. | Constructing a Pseudorandom Generator Requires an Almost Linear Number of Calls |
Modern civilian and military systems have created a demand for sophisticated intelligent autonomous machines capable of operating in uncertain dynamic environments. Such systems are realizable thanks in large part to major advances in perception and decision-making techniques, which in turn have been propelled forward by modern machine learning tools. However, these newer forms of intelligent autonomy raise questions about when/how communication of the operational intent and assessments of actual vs. supposed capabilities of autonomous agents impact overall performance. This symposium examines the possibilities for enabling intelligent autonomous systems to self-assess and communicate their ability to effectively execute assigned tasks, as well as reason about the overall limits of their competencies and maintain operability within those limits. The symposium brings together researchers working in this burgeoning area of research to share lessons learned, identify major theoretical and practical challenges encountered so far, and potential avenues for future research and real-world applications. | AAAI 2022 Fall Symposium: Lessons Learned for Autonomous Assessment of Machine Abilities (LLAAMA) |
Semi-competing risks refer to the setting where primary scientific interest lies in estimation and inference with respect to a non-terminal event, the occurrence of which is subject to a terminal event. In this paper, we present the R package SemiCompRisks that provides functions to perform the analysis of independent/clustered semi-competing risks data under the illness-death multi-state model. The package allows the user to choose the specification for model components from a range of options giving users substantial flexibility, including: accelerated failure time or proportional hazards regression models; parametric or non-parametric specifications for baseline survival functions; parametric or non-parametric specifications for random effects distributions when the data are cluster-correlated; and, a Markov or semi-Markov specification for terminal event following non-terminal event. While estimation is mainly performed within the Bayesian paradigm, the package also provides the maximum likelihood estimation for select parametric models. The package also includes functions for univariate survival analysis as complementary analysis tools. | SemiCompRisks: An R Package for Independent and Cluster-Correlated Analyses of Semi-Competing Risks Data |
We define a set of orthogonal functions on the complex projective space CP^{N-1}, and compute their Clebsch-Gordan coefficients as well as a large class of 6-j symbols. We also provide all the needed formulae for the generation of high-temperature expansions for U(N)-invariant spin models defined on CP^{N-1}. | Pseudo-Character Expansions for U(N)-Invariant Spin Models on CP^{N-1} |
We present the concept of a feedback-based topological acoustic metamaterial as a tool for realizing autonomous and active guiding of sound beams along arbitrary curved paths in free two-dimensional space. The metamaterial building blocks are acoustic transducers, embedded in a slab waveguide. The transducers generate a desired dispersion profile in closed-loop by processing real-time pressure field measurements through preprogrammed controllers. In particular, the metamaterial can be programmed to exhibit analogies of quantum topological wave phenomena, which enables unconventional and exceptionally robust sound beam guiding. As an example, we realize the quantum valley Hall effect by creating, using a collocated pressure feedback, an alternating acoustic impedance pattern across the waveguide. The pattern is traversed by artificial trajectories of different shapes, which are reconfigurable in real-time. Due to topological protection, the sound waves between the plates remain localized on the trajectories, and do not back-scatter by the sharp corners or imperfections in the design. The feedback-based design can be used to realize arbitrary physical interactions in the metamaterial, including non-local, nonlinear, time-dependent, or non-reciprocal couplings, paving the way to new unconventional acoustic wave guiding on the same reprogrammable platform. We then present a non-collocated control algorithm, which mimics another quantum effect, rendering the sound beams uni-directional. | Real-Time Steering of Curved Sound Beams in a Feedback-based Topological Acoustic Metamaterial |
In this paper we study the properties of Bose-Einstein condensates in shallow traps. We discuss the case of a Gaussian potential, but many of our results apply also to the traps having a small quadratic anharmonicity. We show the errors introduced when a Gaussian potential is approximated with a parabolic potential, these errors can be quite large for realistic optical trap parameter values. We study the behavior of the condensate fraction as a function of trap depth and temperature and calculate the chemical potential of the condensate in a Gaussian trap. Finally we calculate the frequencies of the collective excitations in shallow spherically symmetric and 1D traps. | Bose-Einstein condensation in shallow traps |
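The error made when a Gaussian trap is replaced by its parabolic expansion can be checked numerically. The functional form $V(r) = V_0(1 - e^{-2r^2/w^2})$ (a standard optical dipole trap profile) and the unit choices below are assumptions for illustration, not the paper's parameter values:

```python
import math

V0, w = 1.0, 1.0   # trap depth and waist in arbitrary units (illustrative)

def gaussian_trap(r):
    """Gaussian trap potential, measured from the trap bottom."""
    return V0 * (1.0 - math.exp(-2.0 * r * r / (w * w)))

def harmonic_approx(r):
    """Leading-order (parabolic) expansion of the Gaussian trap."""
    return 2.0 * V0 * r * r / (w * w)

# Relative error of the parabolic approximation at increasing radii:
# it grows quickly away from the trap center.
errors = [(harmonic_approx(r) - gaussian_trap(r)) / gaussian_trap(r)
          for r in (0.1, 0.3, 0.5)]
```

Even at half a waist from the center the parabolic potential overestimates the Gaussian one by more than 25%, consistent with the abstract's warning about realistic optical trap parameters.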
This paper presents an algorithmic method to study structural properties of nonlinear control systems in dependence of parameters. The result consists of a description of parameter configurations which cause different control-theoretic behaviour of the system (in terms of observability, flatness, etc.). The constructive symbolic method is based on the differential Thomas decomposition into disjoint simple systems, in particular its elimination properties. | Thomas decompositions of parametric nonlinear control systems |
A key problem in computational biology is discovering the gene expression changes that regulate cell fate transitions, in which one cell type turns into another. However, each individual cell cannot be tracked longitudinally, and cells at the same point in real time may be at different stages of the transition process. This can be viewed as a problem of learning the behavior of a dynamical system from observations whose times are unknown. Additionally, a single progenitor cell type often bifurcates into multiple child cell types, further complicating the problem of modeling the dynamics. To address this problem, we developed an approach called variational mixtures of ordinary differential equations. By using a simple family of ODEs informed by the biochemistry of gene expression to constrain the likelihood of a deep generative model, we can simultaneously infer the latent time and latent state of each cell and predict its future gene expression state. The model can be interpreted as a mixture of ODEs whose parameters vary continuously across a latent space of cell states. Our approach dramatically improves data fit, latent time inference, and future cell state estimation of single-cell gene expression data compared to previous approaches. | Variational Mixtures of ODEs for Inferring Cellular Gene Expression Dynamics |
The growth of an interface formed by the hierarchical deposition of particles of unequal size is studied in the framework of a dynamical network generated by a horizontal visibility algorithm. For a deterministic model of the deposition process, the resulting network is scale-free with dominant degree exponent $\gamma_e = \ln{3}/\ln{2}$ and transient exponent $\gamma_o = 1$. An exact calculation of the network diameter and clustering coefficient reveals that the network is scale invariant and inherits the modular hierarchical nature of the deposition process. For the random process, the network remains scale free, where the degree exponent asymptotically converges to $\gamma =3$, independent of the system parameters. This result shows that the model is in the class of fractional Gaussian noise (fGn) through the relation between the degree exponent and the series' Hurst exponent $H$. Finally, we show through the degree-dependent clustering coefficient $C(k)$ that the modularity remains present in the system. | Hierarchical deposition and scale-free networks: a visibility algorithm approach |
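The horizontal visibility algorithm underlying this construction admits a short O(n^2) sketch: each data point becomes a node, and two points are linked when every value between them lies strictly below both endpoints. The toy series is chosen only for illustration:

```python
def horizontal_visibility_graph(series):
    """Build the horizontal visibility graph of a time series (O(n^2) sketch).

    Points i < j are linked iff x_k < min(x_i, x_j) for all i < k < j;
    consecutive points are always linked (the condition is vacuous).
    """
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

series = [3, 1, 2, 1, 4]
edges = horizontal_visibility_graph(series)
```

Applying this mapping to the heights produced by the hierarchical deposition process yields the networks whose degree exponents and clustering the abstract analyzes.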
A laser interferometric detector of gravitational waves is studied and a complete solution (to first order in the metric perturbation) of the coupled Einstein-Maxwell equations with appropriate boundary conditions for the light beams is determined. The phase shift, the light deflection and the rotation of the polarization axis induced by gravitational waves are computed. The results are compared with previous literature, and are shown to hold also for detectors which are large in comparison with the gravitational wavelength. | Laser Interferometric Detectors of Gravitational Waves |
Motivation: Ontologies are widely used in biology for data annotation, integration, and analysis. In addition to formally structured axioms, ontologies contain meta-data in the form of annotation axioms which provide valuable pieces of information that characterize ontology classes. Annotations commonly used in ontologies include class labels, descriptions, or synonyms. Despite being a rich source of semantic information, the ontology meta-data are generally unexploited by ontology-based analysis methods such as semantic similarity measures. Results: We propose a novel method, OPA2Vec, to generate vector representations of biological entities in ontologies by combining formal ontology axioms and annotation axioms from the ontology meta-data. We apply a Word2Vec model that has been pre-trained on PubMed abstracts to produce feature vectors from our collected data. We validate our method in two different ways: first, we use the obtained vector representations of proteins as a similarity measure to predict protein-protein interaction (PPI) on two different datasets. Second, we evaluate our method on predicting gene-disease associations based on phenotype similarity by generating vector representations of genes and diseases using a phenotype ontology, and applying the obtained vectors to predict gene-disease associations. These two experiments are just an illustration of the possible applications of our method. OPA2Vec can be used to produce vector representations of any biomedical entity given any type of biomedical ontology. Availability: https://github.com/bio-ontology-research-group/opa2vec Contact: [email protected] and [email protected]. | OPA2Vec: combining formal and informal content of biomedical ontologies to improve similarity-based prediction |
In this paper we are concerned with the stabilization of MUSCL-type finite volume schemes in arbitrary space dimensions. We consider a number of limited reconstruction techniques that are defined in terms of inequality-constrained linear or quadratic programming problems on individual grid elements. No restrictions are placed on the conformity of the grid or on the shape of its elements. In the special case of Cartesian meshes, a novel QP reconstruction is shown to coincide with the widely used Minmod reconstruction. The accuracy and overall efficiency of the stabilized second-order finite volume schemes are supported by numerical experiments. | Constrained Reconstruction in MUSCL-type Finite Volume Schemes
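On a uniform 1D Cartesian grid, the minmod-limited MUSCL reconstruction (which, per the result above, the novel QP reconstruction reproduces in this special case) can be sketched as follows; the grid and data are toy values, not the paper's test cases:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def muscl_interface_states(u, dx):
    """Limited linear reconstruction of left/right states at cell faces.

    u : cell averages on a uniform 1D grid. This is a toy stand-in for
    the general LP/QP-constrained reconstruction of the paper, which
    reduces to minmod on Cartesian meshes.
    """
    # Limited slopes in the interior cells, from one-sided differences.
    s = minmod(np.diff(u)[:-1], np.diff(u)[1:]) / dx
    # Left/right states at the faces between interior cells.
    uL = u[1:-2] + 0.5 * dx * s[:-1]
    uR = u[2:-1] - 0.5 * dx * s[1:]
    return uL, uR

u = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # a step: the limiter must not overshoot
uL, uR = muscl_interface_states(u, dx=1.0)
print(uL, uR)
```

At the step the limited slopes vanish, so the reconstructed face states stay within the data range, while for smooth (e.g. linear) data the reconstruction remains exact to second order.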
The coherent Ising machine is expected to find near-optimal solutions to various combinatorial optimization problems, which has been experimentally confirmed with optical parametric oscillators (OPOs) and a field-programmable gate array (FPGA) circuit. Similar mathematical models were proposed three decades ago by J. J. Hopfield et al. in the context of classical neural networks. In this article, we compare the computational performance of both models. | Performance evaluation of coherent Ising machines against classical neural networks
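The classical baseline can be sketched with the standard discrete Hopfield dynamics descending an Ising energy; the couplings and problem size below are arbitrary toy values, not the benchmark instances of the paper:

```python
import numpy as np

def ising_energy(J, s):
    """Ising energy E = -1/2 s^T J s for spins s in {-1, +1}."""
    return -0.5 * s @ J @ s

def hopfield_descent(J, s, sweeps=10):
    """Asynchronous sign updates: for symmetric J with zero diagonal,
    each single-spin flip never increases the energy, which is the
    discrete dynamics of a classical Hopfield network."""
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):
            h = J[i] @ s            # local field (diagonal of J is zero)
            s[i] = 1 if h >= 0 else -1
    return s

rng = np.random.default_rng(1)
n = 16
J = rng.standard_normal((n, n))
J = (J + J.T) / 2                   # symmetric couplings
np.fill_diagonal(J, 0.0)

s0 = rng.choice([-1, 1], size=n)
e0 = ising_energy(J, s0)
s = hopfield_descent(J, s0.copy())
print(e0, ising_energy(J, s))
```

The monotone energy descent guarantees convergence to a local minimum of the Ising energy; the comparison in the article concerns how close such local minima come to the global optimum relative to the OPO-based machine.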
We searched for long-period variation in the V-band, Ic-band, and RXTE X-ray light curves of the high-mass X-ray binaries (HMXBs) LS 1698 / RX J1037.5-5647, HD 110432 / 1H 1249-637, and HD 161103 / RX J1744.7-2713 in an attempt to discover orbitally induced variation. Data were obtained primarily from the ASAS database and were supplemented by shorter-term observations made with the 24- and 40-inch ANU telescopes and one of the robotic PROMPT telescopes. Fourier periodograms suggested the existence of long-period variation in the V-band light curves of all three HMXBs; however, folding the data at those periods did not reveal convincing periodic variation. At this point we cannot rule out the existence of long-term V-band variation for these three sources, and hints of longer-term variation may be seen in the higher-precision PROMPT data. Long-term V-band observations, on the order of several years, taken at a frequency of at least once per week and with a precision of 0.01 mag, therefore still have a chance of revealing long-term variation in these three HMXBs. | Photometric Observations of Three High Mass X-Ray Binaries and a Search for Variations Induced by Orbital Motion
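A minimal epoch-folding period search of the kind that complements such periodograms can be sketched as follows; the cadence, noise level, and 30-day period are invented toy values, not the observed light curves:

```python
import numpy as np

def fold(t, period):
    """Phase-fold observation times at a trial period."""
    return (t % period) / period

def folded_scatter(t, mag, period, nbins=10):
    """Mean within-bin variance of the folded light curve: at the true
    period the variation concentrates in the bin means, so the
    within-bin scatter drops (a simple epoch-folding statistic)."""
    phase = fold(t, period)
    bins = np.minimum((phase * nbins).astype(int), nbins - 1)
    return np.mean([mag[bins == b].var()
                    for b in range(nbins) if np.any(bins == b)])

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 1000, 400))   # ~3 yr of sparse sampling (days)
true_p = 30.0                             # hypothetical orbital period
mag = 0.05 * np.sin(2 * np.pi * t / true_p) + 0.01 * rng.standard_normal(t.size)

trials = np.linspace(5, 100, 950)
scatter = np.array([folded_scatter(t, mag, p) for p in trials])
best = trials[np.argmin(scatter)]
print(best)
```

The weekly-cadence, 0.01 mag precision requirement quoted above corresponds to the regime where a ~0.05 mag periodic signal would stand out clearly in such a folded-scatter minimum.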
We seek self-similar solutions describing the time-dependent evolution of self-gravitating systems with either spherical symmetry or axisymmetric disk geometry. By assuming the self-similar variable $x\equiv r/(at)$, where $a$ is the isothermal sound speed, we find self-similar solutions extending from the initial instant $t=0$ to the final stage $t\to \infty$ using standard semi-analytical methods. Different types of solutions are constructed, which describe overall expansion or collapse, envelope expansion with core collapse (EECC), the formation of a central rotationally supported quasi-equilibrium disk, as well as shocks. Although infinitely many, these self-similar solutions share similar asymptotic behaviors, which may serve as diagnostics of the velocity and density structures in astrophysical systems. | Outflows and inflows in astrophysical systems
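For the spherically symmetric isothermal case, the standard self-similar reduction behind such solutions can be sketched as follows. This uses the conventional dimensionless variables, which are assumed here rather than taken from the paper; the disk-geometry and rotating cases require modified equations:

```latex
% Dimensionless density alpha(x) and velocity v(x) with x = r/(a t):
\rho(r,t) = \frac{\alpha(x)}{4\pi G t^{2}}, \qquad u(r,t) = a\, v(x).
% Mass and momentum conservation for an isothermal self-gravitating
% gas then collapse from PDEs in (r,t) to coupled ODEs in x alone:
\left[(x-v)^{2}-1\right]\frac{dv}{dx}
   = \left[\alpha\,(x-v)-\frac{2}{x}\right](x-v),
\qquad
\left[(x-v)^{2}-1\right]\frac{1}{\alpha}\frac{d\alpha}{dx}
   = \left[\alpha-\frac{2}{x}(x-v)\right](x-v).
```

Solutions must cross the sonic critical curve $(x-v)^{2}=1$ smoothly, and the freedom in how they do so is one source of the infinitely many solution branches mentioned above.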
We discuss the spin dependence of the effective two-body interactions appropriate for three-body computations. The only reasonable choice seems to be the fine and hyperfine interactions known for atomic electrons interacting with the nucleus. One exception is the nucleon-nucleon interaction, which imposes a different type of symmetry. We use the two-neutron halo nucleus 11Li as an illustration. We demonstrate that models with the wrong spin dependence are essentially without predictive power. The Pauli-forbidden core and valence states must be treated consistently. | Spin-dependent effective interactions for halo nuclei