Self-adaptation is a promising approach to manage the complexity of modern software systems. A self-adaptive system is able to adapt autonomously to internal dynamics and changing conditions in the environment to achieve particular quality goals. Our particular interest is in decentralized self-adaptive systems, in which central control of adaptation is not an option. One important challenge in self-adaptive systems, in particular those with decentralized control of adaptation, is to provide guarantees about the intended runtime qualities. In this paper, we present a case study in which we use model checking to verify behavioral properties of a decentralized self-adaptive system. Concretely, we contribute a formalized architecture model of a decentralized traffic monitoring system and prove a number of self-adaptation properties for flexibility and robustness. To model the main processes in the system we use timed automata, and for the specification of the required properties we use timed computation tree logic. We use the Uppaal tool to specify the system and verify the flexibility and robustness properties.
A Case Study on Formal Verification of Self-Adaptive Behaviors in a Decentralized System
H. Lenstra has pointed out that a cubic polynomial of the form (x-a)(x-b)(x-c) + r(x-d)(x-e), where {a,b,c,d,e} is some permutation of {0,1,2,3,4}, is irreducible modulo 5 because every possible linear factor divides one summand but not the other. We classify polynomials over finite fields that admit an irreducibility proof with this structure.
Visibly irreducible polynomials over finite fields
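A minimal sketch (not from the paper) verifying Lenstra's observation by brute force in Python: for every permutation (a,b,c,d,e) of (0,1,2,3,4) and every nonzero r, the cubic has no root mod 5, and a rootless cubic over a field is irreducible. The function names below are illustrative, not the paper's notation.

```python
# For any permutation (a,b,c,d,e) of (0,1,2,3,4) and nonzero r, the cubic
# (x-a)(x-b)(x-c) + r(x-d)(x-e) has no root mod 5: each x in F_5 is one of
# a..e, so it kills exactly one summand while the other stays nonzero.
from itertools import permutations

def poly_at(x, a, b, c, d, e, r, p=5):
    return ((x - a) * (x - b) * (x - c) + r * (x - d) * (x - e)) % p

def is_visibly_irreducible(a, b, c, d, e, r, p=5):
    # No root in F_p  <=>  no linear factor  <=>  irreducible for degree 3.
    return all(poly_at(x, a, b, c, d, e, r, p) != 0 for x in range(p))

# Exhaustive check over all 120 permutations and r in {1,2,3,4}.
assert all(
    is_visibly_irreducible(*perm, r)
    for perm in permutations(range(5))
    for r in range(1, 5)
)
```

Note that r = 0 must be excluded: the polynomial then degenerates to (x-a)(x-b)(x-c), which visibly has three roots.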
We use a set of high-resolution cosmological N-body simulations to investigate the inner mass profile of galaxy-sized cold dark matter (CDM) halos. These simulations extend the thorough numerical convergence study presented in Paper I of this series (Power et al. 2003), and demonstrate that the mass profile of CDM halos can be robustly estimated beyond a minimum converged radius of order r_conv ~ 1 kpc/h in our highest resolution runs. The density profiles of simulated halos become progressively shallower from the virial radius inwards, and show no sign of approaching a well-defined power-law behaviour near the centre. At r_conv, the logarithmic slope of the density profile is steeper than the asymptotic \rho \propto r^-1 expected from the formula proposed by Navarro, Frenk, and White (1996), but significantly shallower than the steeply divergent \rho \propto r^-1.5 cusp proposed by Moore et al. (1999). We perform a direct comparison of the spherically-averaged dark matter circular velocity (V_c) profiles with rotation curves of low surface brightness (LSB) galaxies from the samples of de Blok et al. (2001), de Blok and Bosma (2002), and Swaters et al. (2003). Most (about two-thirds) LSB galaxies in this dataset are roughly consistent with CDM halo V_c profiles. However, about one third of LSBs in these samples feature a sharp transition between the rising and flat part of the rotation curve that is not seen in the V_c profiles of CDM halos. This discrepancy has been interpreted as excluding the presence of cusps, but we argue that it might simply reflect the difference between circular velocity and gas rotation speed likely to arise in gaseous disks embedded within realistic, triaxial CDM halos.
The Inner Structure of LCDM Halos II: Halo Mass Profiles and LSB Rotation Curves
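For reference, the two cuspy profiles being discriminated between are commonly written in the following standard forms (as given in the cited papers; $\rho_s$ and $r_s$ denote a characteristic density and scale radius):

```latex
\rho_{\mathrm{NFW}}(r) = \frac{\rho_s}{(r/r_s)\,(1+r/r_s)^{2}}, \qquad
\rho_{\mathrm{M99}}(r) = \frac{\rho_s}{(r/r_s)^{1.5}\,\bigl[1+(r/r_s)^{1.5}\bigr]},
```

with inner logarithmic slopes of $-1$ and $-1.5$ respectively, both steepening to $\rho \propto r^{-3}$ at large radii.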
Motivated by applications to proving regularity of solutions to degenerate parabolic equations arising in population genetics, we study existence, uniqueness and the strong Markov property of weak solutions to a class of degenerate stochastic differential equations. The stochastic differential equations considered in our article admit solutions supported in the set $[0,\infty)^n\times\mathbb{R}^m$, and they are degenerate in the sense that the diffusion matrix is not strictly elliptic, as the smallest eigenvalue converges to zero proportional to the distance to the boundary of the domain, and the drift coefficients are allowed to have power-type singularities in a neighborhood of the boundary of the domain. Under suitable regularity assumptions on the coefficients, we establish existence of weak solutions that satisfy the strong Markov property, and uniqueness in law in the class of Markov processes.
Existence, uniqueness and the strong Markov property of solutions to Kimura diffusions with singular drift
Recently a number of analytic prescriptions for computing the non-linear matter power spectrum have appeared in the literature. These typically involve resummation or closure prescriptions which do not have a rigorous error control, thus they must be compared with numerical simulations to assess their range of validity. We present a direct side-by-side comparison of several of these analytic approaches, using a suite of high-resolution N-body simulations as a reference, and discuss some general trends. All of the analytic results correctly predict the behavior of the power spectrum at the onset of non-linearity, and improve upon a pure linear theory description at very large scales. All of these theories fail at sufficiently small scales. At low redshift the dynamic range in scale where perturbation theory is both relevant and reliable can be quite small. We also compute for the first time the 2-loop contribution to standard perturbation theory for CDM models, finding improved agreement with simulations at large redshift. At low redshifts however the 2-loop term is larger than the 1-loop term on quasi-linear scales, indicating a breakdown of the perturbation expansion. Finally, we comment on possible implications of our results for future studies.
A critical look at cosmological perturbation theory techniques
We present high-resolution cosmological hydrodynamic simulations of three galaxy clusters employing a two-temperature model for the intracluster medium. We show that electron temperatures in cluster outskirts are significantly lower than the mean gas temperature, because Coulomb collisions are insufficient to keep electrons and ions in thermal equilibrium. This deviation is larger in more massive and less relaxed systems, ranging from 5% in relaxed clusters to 30% for clusters undergoing major mergers. The presence of non-equilibrium electrons leads to significant suppression of the SZE signal at large cluster-centric radius. The suppression of the electron pressure also leads to an underestimate of the hydrostatic mass. Merger-driven, internal shocks may also generate significant populations of non-equilibrium electrons in the cluster core, leading to a 5% bias on the integrated SZ mass proxy during cluster mergers.
Non-Equilibrium Electrons and the Sunyaev-Zel'dovich Effect of Galaxy Clusters
We provide the optimal measurement strategy for a class of noisy channels that reduce to the identity channel for a specific value of a parameter (spreading channels). We provide an example that is physically relevant: the estimation of the absolute value of the displacement in the presence of phase randomizing noise. Surprisingly, this noise does not affect the effectiveness of the optimal measurement. We show that, for small displacement, a squeezed vacuum probe field is optimal among strategies with the same average energy. A squeezer followed by photodetection is the optimal detection strategy that attains the quantum Fisher information, whereas the customarily used homodyne detection becomes useless in the limit of small displacements, due to the same effect that gives Rayleigh's curse in optical superresolution. There is a quantum advantage: a squeezed or a Fock state with $N$ average photons allows one to estimate the parameter asymptotically with $\sqrt{N}$ better precision than classical states with the same energy.
Quantum metrology of noisy spreading channels
In dilute suspensions of swimming microorganisms the local fluid velocity is a random superposition of the flow fields set up by the individual organisms, which in turn have multipole contributions decaying as inverse powers of distance from the organism. Here we show that the conditions under which the central limit theorem guarantees a Gaussian probability distribution function of velocities are satisfied when the leading force singularity is a Stokeslet, but are not when it is any higher multipole. These results are confirmed by numerical studies and by experiments on suspensions of the alga Volvox carteri, which show that deviations from Gaussianity arise from near-field effects.
Fluid Velocity Fluctuations in a Suspension of Swimming Protists
The Coronavirus Disease (COVID-19) has affected 1.8 million people and resulted in more than 110,000 deaths as of April 12, 2020. Several studies have shown that tomographic patterns seen on chest Computed Tomography (CT), such as ground-glass opacities, consolidations, and crazy paving pattern, are correlated with the disease severity and progression. CT imaging can thus emerge as an important modality for the management of COVID-19 patients. AI-based solutions can be used to support CT based quantitative reporting and make reading efficient and reproducible if quantitative biomarkers, such as the Percentage of Opacity (PO), can be automatically computed. However, COVID-19 has posed unique challenges to the development of AI, specifically concerning the availability of appropriate image data and annotations at scale. In this paper, we propose to use synthetic datasets to augment an existing COVID-19 database to tackle these challenges. We train a Generative Adversarial Network (GAN) to inpaint COVID-19 related tomographic patterns on chest CTs from patients without infectious diseases. Additionally, we leverage location priors derived from manually labeled chest CTs of COVID-19 patients to generate appropriate abnormality distributions. Synthetic data are used to improve both lung segmentation and segmentation of COVID-19 patterns by adding 20% of synthetic data to the real COVID-19 training data. We collected 2143 chest CTs, containing 327 COVID-19 positive cases, acquired from 12 sites across 7 countries. By testing on 100 COVID-19 positive and 100 control cases, we show that synthetic data can help improve both lung segmentation (+6.02% lesion inclusion rate) and abnormality segmentation (+2.78% dice coefficient), leading to an overall more accurate PO computation (+2.82% Pearson coefficient).
3D Tomographic Pattern Synthesis for Enhancing the Quantification of COVID-19
The properties of 5D gravitational flux tubes are considered. With the cross section and the 5th dimension in the Planck region, such tubes can be considered as string-like objects, namely $\Delta$-strings. A model of the attachment of a $\Delta$-string to a spacetime is offered. It is shown that the attachment point is a model of an electric charge for an observer living in the spacetime. Magnetic charges are forbidden in this model.
Some properties of a $\Delta$-string
It has recently become feasible to run personal digital assistants on phones and other personal devices. In this paper we describe a design for a natural language understanding system that runs on device. In comparison to a server-based assistant, this system is more private, more reliable, faster, more expressive, and more accurate. We describe the reasoning that led to key choices about architecture and technologies. For example, some approaches in the dialog systems literature are difficult to maintain over time in a deployment setting. We hope that sharing lessons from our practical experience may help inform future work in the research community.
Intelligent Assistant Language Understanding On Device
The question of what is genuinely quantum about weak values is only ever going to elicit strongly subjective opinions---it is not a scientific question. Good questions, when comparing theories, are operational---they deal with the unquestionable outcomes of experiment. We give the anomalous shift of weak values an objective meaning through a generalization to an operational definition of anomalous post-selected averages. We show that the presence of these averages necessitates correlations in every model giving rise to them---quantum or classical. Characterizing such correlations shows that they are ubiquitous. We present the simplest classical example, requiring no disturbance, that realizes these generalized anomalous weak values.
Classical correlation alone supplies the anomaly to weak values
The process of lunar magma ocean solidification provides constraints on the properties of distinct chemical reservoirs in the lunar mantle that formed during the early evolution of the Moon. We use a combination of phase equilibria models consistent with experimental results on lunar magma ocean crystallization to study the effect of bulk silicate Moon composition on the properties of lunar mantle reservoirs. We find that the densities and relative proportions of these mantle reservoirs, in particular of the late-forming ilmenite-bearing cumulates (IBC), strongly depend on the FeO content of the bulk silicate Moon. This relation has implications for post-magma ocean mantle dynamics and the mass distribution in the lunar interior, because the dense IBC form at shallow depths but tend to sink towards the core-mantle boundary. We quantify the relations between bulk silicate Moon FeO content, IBC thickness and bulk Moon density, as well as mantle stratigraphy and bulk silicate Moon moment of inertia, to constrain the bulk silicate Moon FeO content and the efficiency of IBC sinking. In combination with seismic and selenodetic constraints on mantle stratigraphy, core radius, the extent of the low velocity zone at the core-mantle boundary, considerations about the present-day selenotherm, and the effects of reservoir mixing by convection, our model indicates that the bulk silicate Moon is only moderately enriched in FeO compared to the Earth's mantle and contains about 9.4 - 10.9 weight percent FeO (with a lowermost limit of 8.3 weight percent and an uppermost limit of 11.9 weight percent). We also conclude that the observed bulk silicate Moon moment of inertia requires incomplete sinking of the IBC layer by mantle convection: only 20 - 60 percent of the IBC material might have reached the core-mantle boundary, while the rest either remained at the depth of origin or was mixed into the middle mantle.
Employing magma ocean crystallization models to constrain structure and composition of the lunar interior
I present results for soft anomalous dimensions through three loops for many QCD processes. In particular, I give detailed expressions for soft anomalous dimensions in various processes with electroweak and Higgs bosons as well as single top quarks and top-antitop pairs.
Three-loop soft anomalous dimensions in QCD
An analog hadron calorimeter (AHCAL) prototype of 5.3 nuclear interaction lengths thickness has been designed and constructed by members of the CALICE Collaboration. The AHCAL prototype consists of a 38-layer sandwich structure of steel plates and 7608 scintillator tiles that are read out by wavelength-shifting fibres coupled to SiPMs. The signal is amplified and shaped with a custom-designed ASIC. A calibration/monitoring system based on LED light was developed to monitor the SiPM gain and to measure the full SiPM response curve in order to correct for non-linearity. Ultimately, the physics goals are the study of hadronic shower shapes and testing the concept of particle flow. The technical goal consists of measuring the performance and reliability of 7608 SiPMs. The AHCAL prototype was commissioned in test beams at DESY, CERN and FNAL, and recorded hadronic showers, electron showers and muons at different energies and incident angles.
Comparison of hadron shower data with simulations
We present a method for constraining the evolution of the galaxy luminosity-velocity (LV) relation in hierarchical scenarios of structure formation. The comoving number density of dark-matter halos with circular velocity of 200 km/s is predicted in favored CDM cosmologies to be nearly constant over the redshift range 0<z<5. Any observed evolution in the density of bright galaxies implies in turn a corresponding evolution in the LV relation. We consider several possible forms of evolution for the zero-point of the LV relation and predict the corresponding evolution in galaxy number density. The Hubble Deep Field suggests a large deficit of bright (M_V < -19) galaxies at 1.4 < z < 2. If taken at face value, this implies a dimming of the LV zero-point by roughly 2 magnitudes. Deep, wide-field, near-IR selected surveys will provide more secure measurements to compare with our predictions.
Strong Evolution in the Luminosity-Velocity Relation at z>1?
We derive mass corrections for semi-inclusive deep inelastic scattering of leptons from nucleons using a collinear factorization framework which incorporates the initial state mass of the target nucleon and the final state mass of the produced hadron. The formalism is constructed specifically to ensure that physical kinematic thresholds for the semi-inclusive process are explicitly respected. A systematic study of the kinematic dependencies of the mass corrections to semi-inclusive cross sections reveals that these are even larger than for inclusive structure functions, especially at very small and very large hadron momentum fractions. The hadron mass corrections compete with the experimental uncertainties at kinematics typical of current facilities, and will be important to efforts at extracting parton distributions or fragmentation functions from semi-inclusive processes at intermediate energies.
Hadron mass corrections in semi-inclusive deep inelastic scattering
We performed multi-wavelength observations toward the LkHa 101 embedded cluster and its adjacent 85arcmin*60arcmin region. The LkHa 101 embedded cluster is the first and only significant cluster in the California molecular cloud (CMC). These observations reveal that the LkHa 101 embedded cluster is located just at the projected intersection of two filaments. One filament is the highest-density section of the CMC; the other is a newly identified filament with low-density gas emission. Toward the projected intersection, we find bridging features connecting the two filaments in velocity and identify a V-shaped gas structure. These features agree with the scenario that the two filaments are colliding with each other. Using the Five-hundred-meter Aperture Spherical radio Telescope (FAST), we measured that the RRL velocity of the LkHa 101 H II region is 0.5 km/s, which is related to the velocity component of the CMC filament. Moreover, some YSOs are distributed outside the intersection region. We suggest that the cloud-cloud collision, together with the fragmentation of the main filament, may play an important role in the YSO formation of the cluster.
First embedded cluster formation in California molecular cloud
Low-light image enhancement strives to improve contrast, adjust visibility, and restore distortions in color and texture. Existing methods usually pay more attention to improving visibility and contrast by increasing the lightness of low-light images, while disregarding the significance of color and texture restoration for high-quality images. To address this issue, we propose a novel luminance and chrominance dual-branch network, termed LCDBNet, for low-light image enhancement, which divides the task into two sub-tasks: luminance adjustment and chrominance restoration. Specifically, LCDBNet is composed of two branches, namely a luminance adjustment network (LAN) and a chrominance restoration network (CRN). LAN takes responsibility for learning brightness-aware features, leveraging long-range dependency and local attention correlation, while CRN concentrates on learning detail-sensitive features via multi-level wavelet decomposition. Finally, a fusion network is designed to blend their learned features to produce visually impressive images. Extensive experiments conducted on seven benchmark datasets validate the effectiveness of our proposed LCDBNet, and the results show that LCDBNet achieves superior performance in terms of multiple reference/non-reference quality evaluators compared to other state-of-the-art competitors. Our code and pretrained models will be made available.
Division Gets Better: Learning Brightness-Aware and Detail-Sensitive Representations for Low-Light Image Enhancement
Possible high-$T_c$ superconductivity (SC) has been found experimentally in the bilayer material La$_3$Ni$_2$O$_7$ under high pressure recently, in which the Ni-$3d_{3z^2-r^2}$ and $3d_{x^2-y^2}$ orbitals are expected to play a key role in the electronic structure and the SC. Here we study the two-orbital electron correlations and the nature of the SC using the bilayer two-orbital Hubbard model downfolded from the band structure of La$_3$Ni$_2$O$_7$ in the framework of the dynamical mean-field theory. We find that each of the two orbitals forms $s_\pm$-wave SC pairing. Because of the nonlocal inter-orbital hoppings, the two-orbital SCs are concomitant and they transition to Mott insulating states simultaneously when tuning the system to half filling. The Hund's coupling induced local inter-orbital spin coupling enhances the electron correlations pronouncedly and is crucial to the SC.
Correlation Effects and Concomitant Two-Orbital $s_\pm$-Wave Superconductivity in La$_3$Ni$_2$O$_7$ under High Pressure
Granular media take on great importance in industry and geophysics, posing a severe challenge to materials science. Their response properties elude known soft rheological models, even when the yield-stress discontinuity is blurred by vibro-fluidization. Here we propose a broad rheological scenario where average stress sums up a frictional contribution, generalizing conventional $\mu(I)$-rheology, and a kinetic collisional term dominating at fast fluidization. Our conjecture fairly describes a wide series of experiments in a vibrofluidized vane setup, whose phenomenology includes velocity weakening, shear thinning, a discontinuous thinning transition, and gaseous shear thickening. The employed setup gives access to dynamic fluctuations, which exhibit a broad range of timescales. In the slow dense regime the frequency of cage-opening increases with stress and enhances, with respect to $\mu(I)$-rheology, the decrease of viscosity. Diffusivity is exponential in the shear stress in both thinning and thickening regimes, with a huge growth near the transition.
Unified rheology of vibro-fluidized dry granular media: From slow dense flows to fast gas-like regimes
Discriminant analysis, including linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA), is a popular approach to classification problems. It is well known that LDA is suboptimal for analyzing heteroscedastic data, for which QDA would be an ideal tool. However, QDA is less helpful when the number of features in a data set is moderate or high, and LDA and its variants often perform better due to their robustness against dimensionality. In this work, we introduce a new dimension reduction and classification method based on QDA. In particular, we define and estimate the optimal one-dimensional (1D) subspace for QDA, which is a novel hybrid approach to discriminant analysis. The new method can handle data heteroscedasticity with the number of parameters equal to that of LDA. Therefore, it is more stable than standard QDA and works well for data in moderate dimensions. We show an estimation consistency property of our method, and compare it with LDA, QDA, regularized discriminant analysis (RDA) and a few other competitors on simulated and real data examples.
Quadratic Discriminant Analysis by Projection
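A minimal NumPy sketch (not the paper's method, which additionally projects onto an optimal 1D subspace) illustrating why QDA helps on heteroscedastic data: the two classes below share a mean and differ only in covariance, so LDA is at chance while plain QDA separates them. All names here are illustrative.

```python
# Plain QDA on heteroscedastic 2-class Gaussian data. Classes share the
# mean, so any linear rule (LDA) is at chance; QDA uses per-class
# covariances and does much better.
import numpy as np

rng = np.random.default_rng(0)
n = 500
X0 = rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], n)     # isotropic
X1 = rng.multivariate_normal([0, 0], [[4, 0], [0, 0.25]], n)  # elongated
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

def qda_fit(X, y):
    # Per-class mean, covariance, and prior.
    return {k: (X[y == k].mean(0), np.cov(X[y == k].T), np.mean(y == k))
            for k in np.unique(y)}

def qda_predict(params, X):
    scores = []
    for mu, S, pi in params.values():
        d = X - mu
        inv = np.linalg.inv(S)
        # Log Gaussian class density (up to a shared constant) plus log prior.
        s = (-0.5 * np.einsum('ij,jk,ik->i', d, inv, d)
             - 0.5 * np.log(np.linalg.det(S)) + np.log(pi))
        scores.append(s)
    return np.array(list(params))[np.argmax(scores, axis=0)]

acc = (qda_predict(qda_fit(X, y), X) == y).mean()
```

For this configuration the Bayes rule reduces to comparing |x1| with 2|x2|, giving roughly 70% accuracy, which the fitted QDA approaches; the paper's projection approach targets the same gain with far fewer parameters in higher dimensions.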
We prove that the Gauss map of a surface of constant mean curvature embedded in Minkowski space is harmonic. This fact will then be used to study 2+1 gravity for surfaces of genus higher than one. By considering the energy of the Gauss map, a canonical transform between the ADM reduced variables and holonomy variables can be constructed. This allows one to solve (in principle) for the evolution in the ADM variables without having to explicitly solve the constraints first.
The Gauss Map and 2+1 Gravity
The spectacular head-on collision of the two gas-rich galaxies of the Taffy system, UGC 12914/15, gives us a unique opportunity to study the consequences of a direct ISM-ISM collision. To interpret existing multi-wavelength observations, we made dynamical simulations of the Taffy system including a sticky particle component. To compare simulation snapshots to HI and CO observations, we assume that the molecular fraction of the gas depends on the square root of the gas volume density. For the comparison of our simulations with observations of polarized radio continuum emission, we calculated the evolution of the 3D large-scale magnetic field for our simulations. The induction equations including the time-dependent gas-velocity fields from the dynamical model were solved for this purpose. Our simulations reproduce the stellar distribution of the primary galaxy, UGC 12914, the prominent HI and CO gas bridge, the offset between the CO and HI emission in the bridge, the bridge isovelocity vectors parallel to the bridge, the HI double-line profiles in the bridge region, the large line-widths (~200 km/s) in the bridge region, the high field strength of the bridge large-scale regular magnetic field, the projected magnetic field vectors parallel to the bridge and the strong total power radio continuum emission from the bridge. The stellar distribution of the secondary model galaxy is more perturbed than observed. The observed distortion of the HI envelope of the Taffy system is not reproduced by our simulations, which use initially symmetric gas disks. The model allows us to define the bridge region in three dimensions. We estimate the total bridge gas mass (HI, warm and cold H2) to be 5-6 x 10^9 M_sun, with a molecular fraction M_H2/M_HI of about unity (abridged).
A dynamical model for the Taffy galaxies UGC 12914/5
We present point-source catalogs for the ~2 Ms exposure of the Chandra Deep Field-South (CDF-S); this is one of the two most-sensitive X-ray surveys ever performed. The survey covers an area of ~436 arcmin^2 and reaches on-axis sensitivity limits of ~1.9x10^{-17} and ~1.3x10^{-16} ergs/cm^2/s for the 0.5-2.0 and 2-8 keV bands, respectively. Four hundred and sixty-two X-ray point sources are detected in at least one of three X-ray bands that were searched; 135 of these sources are new compared to the previous ~1 Ms CDF-S detections. Source positions are determined using centroid and matched-filter techniques; the median positional uncertainty is ~0.36". The X-ray-to-optical flux ratios of the newly detected sources indicate a variety of source types; ~55% of them appear to be active galactic nuclei while ~45% appear to be starburst and normal galaxies. In addition to the main Chandra catalog, we provide a supplementary catalog of 86 X-ray sources in the ~2 Ms CDF-S footprint that was created by merging the ~250 ks Extended Chandra Deep Field-South with the CDF-S; this approach provides additional sensitivity in the outer portions of the CDF-S. A second supplementary catalog that contains 30 X-ray sources was constructed by matching lower significance X-ray sources to bright optical counterparts (R<23.8); the majority of these sources appear to be starburst and normal galaxies. The total number of sources in the main and supplementary catalogs is 578. R-band optical counterparts and basic optical and infrared photometry are provided for the X-ray sources in the main and supplementary catalogs. We also include existing spectroscopic redshifts for 224 of the X-ray sources. (Abstract abridged)
The Chandra Deep Field-South Survey: 2 Ms Source Catalogs
In this paper, the linear Gaussian relay problem is considered. Under the linear time-invariant (LTI) model the problem is formulated in the frequency domain based on the Toeplitz distribution theorem. Under the further assumption of realizable input spectra, the LTI Gaussian relay problem is converted to a joint design problem of source and relay filters under two power constraints, one at the source and the other at the relay, and a practical solution to this problem is proposed based on the projected subgradient method. Numerical results show that the proposed method yields a noticeable gain over the instantaneous amplify-and-forward (AF) scheme in inter-symbol interference (ISI) channels. Also, the optimality of the AF scheme within the class of one-tap relay filters is established in flat-fading channels.
A joint time-invariant filtering approach to the linear Gaussian relay problem
The subreddit r/The_Donald was repeatedly denounced as a toxic and misbehaving online community, reasons for which it faced a sequence of increasingly constraining moderation interventions by Reddit administrators. It was quarantined in June 2019, restricted in February 2020, and finally banned in June 2020, but despite precursory work on the matter, the effects of this sequence of interventions are still unclear. In this work, we follow a multidimensional causal inference approach to study data containing more than 15M posts made in a time frame of 2 years, to examine the effects of such interventions inside and outside of the subreddit. We find that the interventions greatly reduced the activity of problematic users. However, the interventions also caused an increase in toxicity and led users to share more polarized and less factual news. In addition, the restriction had stronger effects than the quarantine, and core users of r/The_Donald suffered stronger effects than the rest of users. Overall, our results provide evidence that the interventions had mixed effects and paint a nuanced picture of the consequences of community-level moderation strategies. We conclude by reflecting on the challenges of policing online platforms and on the implications for the design and deployment of moderation interventions.
Make Reddit Great Again: Assessing Community Effects of Moderation Interventions on r/The_Donald
This paper introduces a Nearly Unstable INteger-valued AutoRegressive Conditional Heteroskedasticity (NU-INARCH) process for dealing with count time series data. It is proved that a proper normalization of the NU-INARCH process endowed with a Skorohod topology weakly converges to a Cox-Ingersoll-Ross diffusion. The asymptotic distribution of the conditional least squares estimator of the correlation parameter is established as a functional of certain stochastic integrals. Numerical experiments based on Monte Carlo simulations are provided to verify the behavior of the asymptotic distribution under finite samples. These simulations reveal that the nearly unstable approach provides satisfactory and better results than those based on the stationarity assumption even when the true process is not that close to non-stationarity. A unit root test is proposed and its Type-I error and power are examined via Monte Carlo simulations. As an illustration, the proposed methodology is applied to the daily number of deaths due to COVID-19 in the United Kingdom.
Nearly Unstable Integer-Valued ARCH Process and Unit Root Testing
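An illustrative sketch (assuming the standard INARCH(1) specification X_t | past ~ Poisson(omega + alpha * X_{t-1}); this is not the paper's code): simulate a nearly unstable path with alpha close to 1 and recover alpha with the conditional least squares estimator studied in the paper.

```python
# Simulate an INARCH(1) count process near the unit root (alpha ~ 1) and
# estimate (omega, alpha) by conditional least squares.
import numpy as np

def simulate_inarch1(omega, alpha, T, rng):
    X = np.zeros(T, dtype=int)
    for t in range(1, T):
        # Conditional Poisson with intensity omega + alpha * X_{t-1}.
        X[t] = rng.poisson(omega + alpha * X[t - 1])
    return X

def cls_estimate(X):
    # Conditional least squares: regress X_t on (1, X_{t-1}).
    Z = np.column_stack([np.ones(len(X) - 1), X[:-1]])
    omega_hat, alpha_hat = np.linalg.lstsq(Z, X[1:], rcond=None)[0]
    return omega_hat, alpha_hat

rng = np.random.default_rng(1)
X = simulate_inarch1(omega=2.0, alpha=0.98, T=20000, rng=rng)
omega_hat, alpha_hat = cls_estimate(X)
```

Repeating this over many replications with alpha approaching 1 as T grows is one way to visualize the nonstandard (Cox-Ingersoll-Ross-related) limit distribution of alpha_hat that the paper derives.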
Compton scattering is a promising probe for quantifying Li under in-operando conditions, since high-energy X-rays, which have high penetration power into materials, are used as the incident beam and the Compton-scattered energy spectrum has an element-specific line shape. We develop an in-operando method for quantifying the Li composition in electrodes using line-shape (S-parameter) analysis of the Compton-scattered energy spectrum. In this study, we apply S-parameter analysis to a commercial coin-cell Li-ion rechargeable battery and obtain the variation of the S-parameters during a charge/discharge cycle at the positive and negative electrodes. Using calibration curves for the Li composition in the electrodes, we determine the change in Li composition of the positive and negative electrodes simultaneously through the S-parameters.
In-operando quantitation of Li concentration for commercial Li-ion rechargeable battery using high-energy X-ray Compton scattering
The holographic entanglement entropy of an infinite strip subsystem on the asymptotic AdS boundary is used as a probe to study the thermodynamic instabilities of planar R-charged black holes (or their dual field theories). We focus on the single-charge AdS black holes in $D=5$, which correspond to spinning D3-branes with one non-vanishing angular momentum. Our results show that the holographic entanglement entropy indeed exhibits the thermodynamic instability associated with the divergence of the specific heat. When the width of the strip is large enough, the finite part of the holographic entanglement entropy as a function of the temperature resembles the thermal entropy, as is expected. As the width becomes smaller, however, the two entropies behave differently. In particular, there exists a critical value for the width of the strip, below which the finite part of the holographic entanglement entropy as a function of the temperature develops a self-intersection. We also find similar behavior in the single-charge black holes in $D=4$ and $7$.
Holographic entanglement entropy and thermodynamic instability of planar R-charged black holes
Given a Boolean algebra B and an embedding e: B -> P(N)/fin, we consider the possibility of extending every (or some) automorphism of B to the whole of P(N)/fin. Among other things, we show, assuming CH, that for a wide class of Boolean algebras there are embeddings for which no non-trivial automorphism can be extended.
Embeddings into P(N)/fin and extension of automorphisms
We derive formulae for some ratios of the Macdonald functions, which are simpler and easier to treat than known formulae. The result gives two applications in probability theory. One is the formula for the L{\'e}vy measure of the distribution of the first hitting time of a Bessel process and the other is an explicit form for the expected volume of the Wiener sausage for an even dimensional Brownian motion. Moreover, the result enables us to write down the algebraic equations whose roots are the zeros of Macdonald functions.
Hitting times of Bessel processes, volume of Wiener sausages and zeros of Macdonald functions
We study a standard-embedding $N=2$ heterotic string compactification on $K3\times T^2$ with a Wilson line turned on and perform a world-sheet calculation of string threshold correction. The result can be expressed in terms of the quantities appearing in the two-loop calculation of bosonic string. We also comment and speculate on the relevance of our result to generalized Kac-Moody superalgebra and $N=2$ heterotic-type IIA duality.
$N=2$ heterotic string threshold correction, $K3$ surface and generalized Kac-Moody superalgebra
We propose the concepts of intersection distribution and non-hitting index, which can be viewed from two related perspectives. The first one concerns a point set $S$ of size $q+1$ in the classical projective plane $PG(2,q)$, where the intersection distribution of $S$ indicates the intersection pattern between $S$ and the lines in $PG(2,q)$. The second one relates to a polynomial $f$ over a finite field $\mathbb{F}_q$, where the intersection distribution of $f$ records an overall distribution property of a collection of polynomials $\{f(x)+cx \mid c \in \mathbb{F}_q\}$. These two perspectives are closely related, in the sense that each polynomial produces a $(q+1)$-set in a canonical way and, conversely, each $(q+1)$-set with a certain property has a polynomial representation. Indeed, the intersection distribution provides a new angle from which to distinguish polynomials over finite fields, based on the geometric properties of the corresponding $(q+1)$-sets. Within the intersection distribution, we identify a particularly interesting quantity called the non-hitting index. For a point set $S$, its non-hitting index counts the number of lines in $PG(2,q)$ which do not hit $S$. For a polynomial $f$ over a finite field $\mathbb{F}_q$, its non-hitting index gives the sum of the sizes of the $q$ value sets $\{f(x)+cx \mid x \in \mathbb{F}_q\}$, where $c \in \mathbb{F}_q$. We derive bounds on the non-hitting index and show that it contains much information about the corresponding set and polynomial. More precisely, using a geometric approach, we show that the non-hitting index suffices to characterize the corresponding point set and polynomial when it is close to the lower and upper bounds. Moreover, we employ an algebraic approach to derive the intersection distribution of several families of point sets and polynomials, and compute the sizes of related Kakeya sets in affine planes.
Intersection distribution, non-hitting index and Kakeya sets in affine planes
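The polynomial-side definition of the non-hitting index is directly computable over small prime fields. A minimal sketch (the helper name is ours; it handles prime q only, so arithmetic mod q is field arithmetic):

```python
def non_hitting_index(f, q):
    """Non-hitting index of a polynomial f over F_q (q prime):
    the sum over c in F_q of the value-set sizes |{f(x) + c*x : x in F_q}|."""
    return sum(len({(f(x) + c * x) % q for x in range(q)}) for c in range(q))

# For f(x) = x^2 over F_5, each f(x) + cx is a quadratic whose image
# has (q+1)/2 = 3 elements, so the index is 5 * 3 = 15.
idx_square = non_hitting_index(lambda x: x * x, 5)

# For f(x) = x, the polynomial x + cx is a bijection unless c = -1
# (where it is constant), giving 4 * 5 + 1 = 21.
idx_linear = non_hitting_index(lambda x: x, 5)
```

Enumerating such values for small q is a quick way to see how the index separates polynomial families, in the spirit of the bounds discussed in the abstract.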
Let $G$ be a compact Lie group. (Compact) topological $G$-manifolds have the $G$-homotopy type of (finite-dimensional) countable $G$-CW complexes (2.5). This partly generalizes Elfving's theorem for locally linear $G$-manifolds [Elf96], wherein the Lie group $G$ is linear (such as compact).
Countable approximation of topological $G$-manifolds: compact Lie groups $G$
Jumping-droplet condensation, namely the out-of-plane jumping of condensed droplets upon coalescence, is a promising technique in the fields of energy harvesting, droplet manipulation, thermal management, etc., yet its use is limited by the challenge of achieving sustainable and programmable control. Here, we characterized the morphological evolution and dynamic behavior of nanoscale condensates on different nanopillar surfaces, and found that there exists an unrevealed domino effect throughout the entire droplet lifecycle and that coalescence is not the only mechanism leading to droplet jumping. Vapor nucleation preferentially occurs in structure intervals, so the formed liquid embryos incubate and grow in a spatially confined mode, which stores excess surface energy and simultaneously provides an asymmetric Laplace pressure, stimulating the trapped droplets to undergo a dewetting transition or even self-jumping; this can be facilitated by tall, dense nanostructures. Subsequently, adjacent droplets merge and trigger further self-propelled behaviors that depend on the underlying surface nanostructure, including dewetting transitions, coalescence-induced jumping and jumping relay. Moreover, we developed an improved energy-based model that accounts for nano-physical effects; its predictions not only extend coalescence-induced jumping to nanometer-sized droplets but also correlate the surface nanostructure topology with the jumping velocity. This cumulative effect of nucleation, growth and coalescence on the ultimate droplet morphology may offer a new strategy for designing functional nanostructured surfaces that directionally manipulate, transport and collect droplets, and motivate surface engineers to approach the performance ceiling of jumping-droplet condensation.
Sequential Self-Propelled Morphology Transitions of Nanoscale Condensates Diversify the Jumping-Droplet Condensation
Context: Polycyclic Aromatic Hydrocarbons, widely known as PAHs, are widespread in the universe and have been identified in a vast array of astronomical observations from the interstellar medium to protoplanetary discs. They are likely to be associated with the chemical history of the universe and the emergence of life on Earth. However, their abundance on exoplanets remains unknown. Aims: We aim to investigate the feasibility of PAH formation in the thermalized atmospheres of irradiated and non-irradiated hot Jupiters around Sun-like stars. Methods: To this aim, we introduced PAHs in the 1-D self-consistent forward modeling code petitCODE. We simulated a large number of planet atmospheres with different parameters (e.g. carbon to oxygen ratio, metallicity, and effective planetary temperature) to study PAH formation. By coupling the thermochemical equilibrium solution from petitCODE with the 1-D radiative transfer code, petitRADTRANS, we calculated the synthetic transmission and emission spectra for irradiated and non-irradiated planets, respectively, and explored the role of PAHs on planet spectra. Results: Our models show strong correlations between PAH abundance and the aforementioned parameters. In thermochemical equilibrium scenarios, an optimal temperature, elevated carbon to oxygen ratio, and increased metallicity values are conducive to the formation of PAHs, with the carbon to oxygen ratio having the largest effect.
Polycyclic Aromatic Hydrocarbons in Exoplanet Atmospheres I. Thermochemical Equilibrium Models
This paper reports on the first observation of electroweak production of single top quarks by the DZero and CDF collaborations. At Fermilab's 1.96 TeV proton-antiproton collider, a few thousand events are selected from several inverse femtobarns of data that contain an isolated electron or muon and/or missing transverse energy, together with jets that originate from the decays of b quarks. Using sophisticated multivariate analyses to separate signal from background, the DZero collaboration measures a cross section sigma(ppbar->tb+X,tqb+X) = 3.94 +- 0.88 pb (for a top quark mass of 170 GeV) and the CDF collaboration measures a value of 2.3 +0.6 -0.5 pb (for a top quark mass of 175 GeV). These values are consistent with theoretical predictions at next-to-leading order precision. Both measurements have a significance of 5.0 standard deviations, meeting the benchmark to be considered unambiguous observation.
Observation of Single Top Quark Production at the Tevatron
We undertake a systematic review of some results concerning local well-posedness of the Cauchy problem for certain systems of nonlinear wave equations, with minimal regularity assumptions on the initial data. Moreover we provide a considerably simplified and unified treatment of these results and provide also complete proofs for large data. The paper is also intended as an introduction to and survey of current research in the very active area of nonlinear wave equations. The key ingredients throughout the survey are the use of the null structure of the equations we consider and, intimately tied to it, bilinear estimates.
Bilinear Estimates and Applications to Nonlinear Wave Equations
Active galactic nuclei (AGN) are generally accepted to be powered by the release of gravitational energy in a compact accretion disk surrounding a massive black hole. Such disks are also necessary to collimate powerful radio jets seen in some AGN. The unifying classification schemes for AGN further propose that differences in their appearance can be attributed to the opacity of the accreting material, which may obstruct our view of the central region of some systems. The popular model for the obscuring medium is a parsec-scale disk of dense molecular gas, although evidence for such disks has been mostly indirect, as their angular size is much smaller than the resolution of conventional telescopes. Here we report the first direct images of a pc-scale disk of ionised gas within the nucleus of NGC 1068, the archetype of obscured AGN. The disk is viewed nearly edge-on, and individual clouds within the ionised disk are opaque to high-energy radiation, consistent with the unifying classification scheme. In projection, the disk and AGN axes align, from which we infer that the ionised gas disk traces the outer regions of the long-sought inner accretion disk.
A direct image of the obscuring disk surrounding an active galactic nucleus
This note completely describes the bounded or compact Riemann-Stieltjes integral operators $T_g$ acting between the weighted Bergman space pairs $(A^p_\alpha,A^q_\beta)$ in terms of particular regularities of the holomorphic symbols $g$ on the open unit ball of $\Bbb C^n$.
Riemann-Stieltjes Integral Operators between Weighted Bergman Spaces
In time hopping impulse radio, $N_f$ pulses of duration $T_c$ are transmitted for each information symbol. This gives rise to two types of processing gain: (i) pulse combining gain, which is a factor $N_f$, and (ii) pulse spreading gain, which is $N_c=T_f/T_c$, where $T_f$ is the mean interval between two subsequent pulses. This paper investigates the trade-off between these two types of processing gain in the presence of timing jitter. First, an additive white Gaussian noise (AWGN) channel is considered and approximate closed form expressions for bit error probability are derived for impulse radio systems with and without pulse-based polarity randomization. Both symbol-synchronous and chip-synchronous scenarios are considered. The effects of multiple-access interference and timing jitter on the selection of optimal system parameters are explained through theoretical analysis. Finally, a multipath scenario is considered and the trade-off between processing gains of a synchronous impulse radio system with pulse-based polarity randomization is analyzed. The effects of the timing jitter, multiple-access interference and inter-frame interference are investigated. Simulation studies support the theoretical results.
The Trade-off between Processing Gains of an Impulse Radio UWB System in the Presence of Timing Jitter
We show that the state-independent violation of inequalities for noncontextual hidden variable theories introduced in [Phys. Rev. Lett. 101, 210401 (2008)] is universal, i.e., occurs for any quantum mechanical system in which noncontextuality is meaningful. We describe a method to obtain state-independent violations for any system of dimension d > 2. This universality proves that, according to quantum mechanics, there are no "classical" states.
Universality of state-independent violation of correlation inequalities for noncontextual theories
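The state-independent violation referenced in the abstract can be checked numerically in the best-known case, the two-qubit Peres-Mermin square (our choice of concrete observables, not necessarily the paper's general construction). The rows and first two columns of the square multiply to the identity, while the third column multiplies to minus the identity, so the correlation combination ⟨R1⟩+⟨R2⟩+⟨R3⟩+⟨C1⟩+⟨C2⟩-⟨C3⟩ equals 6 for every state, above the noncontextual bound of 4.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# One standard Peres-Mermin square of mutually compatible two-qubit observables
square = [
    [kron(Z, I2), kron(I2, Z), kron(Z, Z)],
    [kron(I2, X), kron(X, I2), kron(X, X)],
    [kron(Z, X),  kron(X, Z),  kron(Y, Y)],
]

rows = [r[0] @ r[1] @ r[2] for r in square]
cols = [square[0][j] @ square[1][j] @ square[2][j] for j in range(3)]

# The operator behind the inequality: its expectation is 6 in any state,
# which exceeds the noncontextual bound 4 state-independently.
Sigma = sum(rows) + cols[0] + cols[1] - cols[2]
```

Since Sigma is a fixed multiple of the identity, the violation does not depend on the state, which is exactly the notion of state-independence the abstract generalizes to all dimensions d > 2.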
We elucidate a magnetic mass effect on a sphaleron energy that is crucial for baryon number preservation needed for successful electroweak baryogenesis. It is found that the sphaleron energy increases in response to the magnetic mass. As an application, we study the sphaleron energy and electroweak phase transition with the magnetic mass in a two-Higgs-doublet model. Although the magnetic mass can screen the gauge boson loops, it relaxes a baryon number preservation criterion more effectively, broadening the baryogenesis-possible region. Our findings would be universal in any new physics models as long as the gauge sector is common to the standard model.
Magnetic mass effect on the sphaleron energy
The global existence of strong solution to the initial-boundary value problem of the three-dimensional compressible viscoelastic fluids near equilibrium is established in a bounded domain. Uniform estimates in $W^{1,q}$ with $q>3$ on the density and deformation gradient are also obtained. All the results apply to the two-dimensional case.
The initial-boundary value problem for the compressible viscoelastic fluids
The Orion-Eridanus superbubble, formed by the nearby Orion high mass star-forming region, contains multiple bright H$\alpha$ filaments on the Eridanus side of the superbubble. We examine the implications of the H$\alpha$ brightnesses and sizes of these filaments, the Eridanus filaments. We find that either the filaments must be highly elongated along the line of sight or they cannot be equilibrium structures illuminated solely by the Orion star-forming region. The Eridanus filaments may, instead, have formed when the Orion-Eridanus superbubble encountered and compressed a pre-existing, ionized gas cloud, such that the filaments are now out of equilibrium and slowly recombining.
The Origin of Ionized Filaments Within the Orion-Eridanus Superbubble
Since the quark-gluon plasma (QGP) reveals some obvious similarities to the well-known electromagnetic plasma (EMP), the accumulated knowledge of EMP can be used in QGP studies. After discussing the similarities and differences of the two systems, we present the theoretical tools that are used to describe the plasmas. These tools include kinetic theory, the hydrodynamic approach and diagrammatic perturbative methods. We consider collective phenomena in the plasma with a particular emphasis on instabilities, which crucially influence the temporal evolution of the system. Finally, the properties of strongly coupled plasma are discussed.
What Do Electromagnetic Plasmas Tell Us about Quark-Gluon Plasma?
We give a construction of two-sided invariant metrics on free products (possibly with amalgamation) of groups with two-sided invariant metrics and, under certain conditions, on HNN extensions of such groups. Our approach is similar to Graev's construction of metrics on free groups over pointed metric spaces.
Graev metrics on free products and HNN extensions
The Hilbert space of the unitary irreducible representations of a Lie group that is a quantum dynamical group is identified with the quantum state space. Hermitian representations of the algebra are observables. The eigenvalue equations for the representations of the set of Casimir invariant operators define the field equations of the system. A general class of dynamical groups are semidirect products K *s N, whose representations are given by Mackey's theory. The homogeneous group K must be a subgroup of the automorphisms of the normal group N. The archetypal dynamical group is the Poincare group, and the field equations defined by the representations of its Casimir operators are the basic equations of physics: Klein-Gordon, Dirac, Maxwell and so forth. This paper explores a more general dynamical group candidate that is also a semidirect product, but where the 'translation' normal subgroup N is now the Heisenberg group. The relevant automorphisms of the Heisenberg group form the symplectic group; this, together with the requirement of an orthogonal metric, leads to the pseudo-unitary group for the homogeneous group K. The physical meaning and motivation of this group, called the quaplectic group, are presented and the Hermitian irreducible representations of the algebra are determined. As with the Poincare group, the choice of group defines the Hilbert space of representations, which are identified with quantum particle states. The field equations, which are the eigenvalue equations for the representations of the Casimir operators, are obtained and investigated. The theory embodies the Born reciprocity principle and a new relativity principle.
Canonically relativistic quantum mechanics: Casimir field equations of the quaplectic group
A zero-dimensional (volume-averaged) and a pseudo-one-dimensional (plug-flow) model are developed to investigate atmospheric-pressure plasma jet devices operated with He, He/O$_2$, He/N$_2$ and He/N$_2$/O$_2$ mixtures. The models are coupled with the Boltzmann equation under the two-term approximation to self-consistently calculate the electron energy distribution function (EEDF). The simulation results are verified against spatially resolved model calculations and validated against a wide variety of measurement data. The nitric oxide (NO) concentration is thoroughly characterized for a variation of the gas mixture ratio, helium flow rate and absorbed power. The concentration measurements at low power are better captured by the simulation with a larger hypothetical "effective" rate coefficient value for the reactive quenching N$_2$(A$^3\Sigma$,B$^3\Pi$) + O($^3$P) $\to$ NO + N($^2$D). This suggests that the NO production at low power is also covered by the species N$_2$(A$^3\Sigma$,B$^3\Pi$;v>0) and multiple higher N$_2$ electronically excited states instead of only N$_2$(A$^3{\Sigma}$,B$^3{\Pi}$;v=0) in this quenching. Furthermore, the O($^3$P) density measurements under the same operation conditions are also better predicted by the simulations with a consideration of the aforementioned hypothetical rate coefficient value. It is found that the contribution of the vibrationally excited nitrogen molecules N$_2$(v$\geqslant$13) to the net NO formation rate gains more significance at higher power. The vibrational distribution functions (VDFs) of O$_2$(v<41) and N$_2$(v<58) are investigated. The sensitivity of the zero-dimensional model with respect to a variation of the VDF resolutions, wall reaction probabilities and synthetic air impurity levels is presented. The simulated plasma properties are sensitive to the variation especially for a feeding gas mixture containing nitrogen.
Zero-dimensional and pseudo-one-dimensional models of atmospheric-pressure plasma jet in binary and ternary mixtures of oxygen and nitrogen with helium background
We introduce a method to compute particle detector transition probabilities in spacetime regions of general curved spacetimes, provided that the curvature is not above a maximum threshold. In particular, we use this method to compare the responses of two detectors, one in a spherically symmetric gravitational field and the other in Rindler spacetime, and thereby compare the Unruh and Hawking effects: we study the vacuum response of a detector freely falling through a stationary cavity in a Schwarzschild background as compared with the response of an equivalently accelerated detector traveling through an inertial cavity in the absence of curvature. We find that as the cavity is placed at larger radii from the black hole, the thermal radiation measured by the detector approaches the quantity recorded by the detector in the Rindler background, showing in which way and at what scales the equivalence principle is recovered in the Hawking-Unruh effect, i.e., when the Hawking effect in a Schwarzschild background becomes equivalent to the Unruh effect in Rindler spacetime.
Cavities in curved spacetimes: the response of particle detectors
During the last years the authors have studied the number of limit cycles of several families of planar vector fields. The common tool has been the use of an extended version of the celebrated Bendixson-Dulac Theorem. The aim of this work is to present an unified approach of some of these results, together with their corresponding proofs. We also provide several applications.
Some Applications of the Extended Bendixson-Dulac Theorem
209Bi nuclear magnetic resonance (NMR) spectroscopy was employed to probe potential spin-orbit effects on orbital diamagnetism in YPtBi and YPdBi crystals. The observed opposite sign and temperature dependent magnitude of 209Bi NMR shifts of both crystals reveal experimental signatures of enhanced orbital diamagnetism induced by spin-orbit interactions. This investigation indicates that NMR isotropic shifts might be beneficial in search of interesting spin-electronic phases among a vast number of topological nontrivial half-Heusler semimetals.
NMR evidence for enhanced orbital diamagnetism in topologically nontrivial half-Heusler semimetals
A mixed dominating set is a collection of vertices and edges that dominates all vertices and edges of a graph. We study the complexity of exact and parameterized algorithms for \textsc{Mixed Dominating Set}, resolving some open questions. In particular, we settle the problem's complexity parameterized by treewidth and pathwidth by giving an algorithm running in time $O^*(5^{tw})$ (improving the current best $O^*(6^{tw})$), as well as a lower bound showing that our algorithm cannot be improved under the Strong Exponential Time Hypothesis (SETH), even if parameterized by pathwidth (improving a lower bound of $O^*((2 - \varepsilon)^{pw})$). Furthermore, by using a simple but so far overlooked observation on the structure of minimal solutions, we obtain branching algorithms which improve both the best known FPT algorithm for this problem, from $O^*(4.172^k)$ to $O^*(3.510^k)$, and the best known exponential-time exact algorithm, from $O^*(2^n)$ and exponential space, to $O^*(1.912^n)$ and polynomial space.
New Algorithms for Mixed Dominating Set
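To make the object concrete (this is only the standard definition checked by brute force, not the paper's $O^*(3.510^k)$ or treewidth algorithms), a mixed dominating set D ⊆ V ∪ E must dominate every vertex (it is in D, has a neighbor in D, or lies on an edge in D) and every edge (it is in D, has an endpoint in D, or shares an endpoint with an edge in D). A minimal exhaustive sketch, with names of our choosing:

```python
from itertools import combinations

def is_mixed_dominating(vertices, edges, D):
    """Check that D (vertices and frozenset edges) dominates all of V and E."""
    Dv = {x for x in D if not isinstance(x, frozenset)}
    De = {x for x in D if isinstance(x, frozenset)}
    for v in vertices:
        if not (v in Dv
                or any(v in e for e in De)  # lies on a chosen edge
                or any(frozenset((v, u)) in edges and u in Dv for u in vertices)):
            return False
    for e in edges:
        if not (e in De
                or any(v in Dv for v in e)          # an endpoint is chosen
                or any(f & e for f in De)):         # shares an endpoint with a chosen edge
            return False
    return True

def min_mixed_dominating_set(vertices, edges):
    """Exhaustive search over subsets of V ∪ E; exponential, tiny graphs only."""
    universe = list(vertices) + list(edges)
    for k in range(len(universe) + 1):
        for D in combinations(universe, k):
            if is_mixed_dominating(vertices, edges, set(D)):
                return set(D)

# Triangle K3: no single vertex or edge dominates everything, but
# e.g. vertex 0 together with edge {1,2} does, so the optimum is 2.
V = {0, 1, 2}
E = {frozenset(p) for p in combinations(V, 2)}
D = min_mixed_dominating_set(V, E)
```

The FPT and exact algorithms in the paper exploit structure in minimal solutions that this naive enumeration ignores.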
The standard circuit model for quantum computation presumes the ability to directly perform gates between arbitrary pairs of qubits, which is unlikely to be practical for large-scale experiments. Power-law interactions with strength decaying as $1/r^\alpha$ in the distance $r$ provide an experimentally realizable resource for information processing, whilst still retaining long-range connectivity. We leverage the power of these interactions to implement a fast quantum fanout gate with an arbitrary number of targets. Our implementation allows the quantum Fourier transform (QFT) and Shor's algorithm to be performed on a $D$-dimensional lattice in time logarithmic in the number of qubits for interactions with $\alpha \le D$. As a corollary, we show that power-law systems with $\alpha \le D$ are difficult to simulate classically even for short times, under a standard assumption that factoring is classically intractable. Complementarily, we develop a new technique to give a general lower bound, linear in the size of the system, on the time required to implement the QFT and the fanout gate in systems that are constrained by a linear light cone. This allows us to prove an asymptotically tighter lower bound for long-range systems than is possible with previously available techniques.
Implementing a Fast Unbounded Quantum Fanout Gate Using Power-Law Interactions
Nanomaterials have much improved properties compared to their bulk counterparts, which makes them ideal materials for applications in various industries. Among the various nanomaterials, the nanoallotropes of carbon, namely fullerene, carbon nanotubes, and graphene, are the most important, as indicated by the fact that their discoverers received prestigious awards such as the Nobel Prize and the Kavli Prize. Carbon forms different nanoallotropes by varying the nature of its orbital hybridization. Since all nanoallotropes of carbon possess exotic physical and chemical properties, they are extensively used in many applications, especially in the electronics industry.
Application of carbon nanomaterials in the electronic industry
We present a reduced order model for three dimensional unsteady pressure-driven flows in micro-channels of variable cross-section. This fast and accurate model is valid for long channels, but allows for large variations in the channel's cross-section along the axis. It is based on an asymptotic expansion of the governing equations in the aspect ratio of the channel. A finite Fourier transform in the plane normal to the flow direction is used to solve for the leading order axial velocity. The corresponding pressure and transverse velocity are obtained via a hybrid analytic-numerical scheme based on recursion. The channel geometry is such that one of the transverse velocity components is negligible, and the other component, in the plane of variation of channel height, is obtained from a combination of the corresponding momentum equation and the continuity equation, assuming a low-degree polynomial Ansatz for the pressure forcing. A key feature of the model is that it puts no restriction on the time dependence of the pressure forcing, in terms of shape and frequency, as long as the advective component of the inertia term is small. This is a major departure from many previous expositions, which assume harmonic forcing. The model proves to be accurate for a wide range of parameters and is two orders of magnitude faster than conventional three dimensional CFD simulations.
Simplified models for unsteady three-dimensional flows in slowly varying microchannels
This monograph is centred at the intersection of three mathematical topics that are theoretical in nature, yet with motivations and relevance deep-rooted in applications: the linear inverse problems on abstract, in general infinite-dimensional Hilbert space; the notion of Krylov subspace associated to an inverse problem, i.e., the cyclic subspace built upon the datum of the inverse problem by repeated application of the linear operator; and the possibility of solving the inverse problem by means of Krylov subspace methods, namely projection methods where the finite-dimensional truncation is made with respect to the Krylov subspace and the approximants converge to an exact solution of the inverse problem.
Inverse linear problems on Hilbert space and their Krylov solvability
We introduce a generalization of the usual vacuum energy, called `deformed vacuum energy', which yields anisotropic pressure whilst preserving zero inertial mass density. It couples to the shear scalar in a unique way, such that they together emulate the canonical scalar field with an arbitrary potential. This opens up a new avenue by reconsidering cosmologies based on canonical scalar fields, along with a bonus that the kinetic term of the scalar field is replaced by an observable, the shear scalar. We further elaborate the aspects of this approach in the context of dark energy.
Scalar field emulator via anisotropically deformed vacuum energy: Application to dark energy
The Hausdorff $\delta$-dimension game was introduced by Das, Fishman, Simmons and {Urba{\'n}ski} and shown to characterize sets in $\mathbb{R}^d$ having Hausdorff dimension $\leq \delta$. We introduce a variation of this game which also characterizes Hausdorff dimension and for which we are able to prove an unfolding result similar to the basic unfolding property for the Banach-Mazur game for category. We use this to derive a number of consequences for Hausdorff dimension. We show that under $\mathsf{AD}$ any wellordered union of sets each of which has Hausdorff dimension $\leq \delta$ has dimension $\leq \delta$. We establish a continuous uniformization result for Hausdorff dimension. The unfolded game also provides a new proof that every $\boldsymbol{\Sigma}^1_1$ set of Hausdorff dimension $\geq \delta$ contains a compact subset of dimension $\geq \delta'$ for any $\delta'<\delta$, and this result generalizes to arbitrary sets under $\mathsf{AD}$.
Hausdorff Dimension Regularity Properties and Games
We focus on interval algorithms for computing guaranteed enclosures of the solutions of constrained global optimization problems in which differential constraints occur. To solve such a global optimization problem with nonlinear ordinary differential equations, a branch-and-bound algorithm based on guaranteed numerical integration methods can be used. Nevertheless, this kind of algorithm is expensive in terms of computation, and defining new methods to reduce the number of branches is still a challenge. Bisection based on the smear value is known to be often the most efficient heuristic for branching algorithms. This heuristic consists in bisecting along the coordinate direction in which the values of the considered function change most rapidly. We propose to define a smear-like function using the sensitivity function obtained by differentiating the ordinary differential equation with respect to its parameters. The sensitivity has already been used in validated simulation for local optimization, but not as a bisection heuristic. We implement this heuristic in a branch-and-bound algorithm to solve a problem of global optimization with nonlinear ordinary differential equations. Experiments show that the gain in terms of the number of branches can be up to 30%.
Sensitivity-based Heuristic for Guaranteed Global Optimization with Nonlinear Ordinary Differential Equations
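The generic smear rule behind such heuristics is easy to sketch. Below, the smear value of coordinate j is |∂f/∂x_j| times the box width in that direction, with the gradient evaluated at the box midpoint as a cheap proxy for the interval Jacobian; this is only the classical smear bisection on a plain function, not the paper's ODE sensitivity variant, and all names are ours.

```python
def smear_bisection_index(box, grad):
    """Index of the coordinate with the largest smear value
    |df/dx_j (midpoint)| * width(x_j) over a box of (lo, hi) intervals."""
    mid = [(lo + hi) / 2 for lo, hi in box]
    g = grad(mid)
    smear = [abs(gj) * (hi - lo) for gj, (lo, hi) in zip(g, box)]
    return max(range(len(box)), key=lambda j: smear[j])

def bisect(box, j):
    """Split the box at the midpoint of coordinate j."""
    lo, hi = box[j]
    m = (lo + hi) / 2
    left, right = list(box), list(box)
    left[j], right[j] = (lo, m), (m, hi)
    return left, right

# f(x, y) = x**2 + 10*y on the unit box: f varies fastest in y,
# so the heuristic bisects coordinate 1.
grad = lambda p: (2 * p[0], 10.0)
box = [(0.0, 1.0), (0.0, 1.0)]
j = smear_bisection_index(box, grad)
left, right = bisect(box, j)
```

In the paper's setting the role of `grad` is played by the sensitivity of the ODE solution with respect to the parameters, obtained from validated simulation.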
A detachment of a hypergraph is formed by splitting each vertex into one or more subvertices, and sharing the incident edges arbitrarily among the subvertices. For a given edge-colored hypergraph $\scr F$, we prove that there exists a detachment $\scr G$ such that the degree of each vertex and the multiplicity of each edge in $\scr F$ (and each color class of $\scr F$) are shared fairly among the subvertices in $\scr G$ (and each color class of $\scr G$, respectively). Let $(\lambda_1,\dots,\lambda_m) K^{h_1,\dots,h_m}_{p_1,\dots,p_n}$ be a hypergraph with vertex partition $\{V_1,\dots, V_n\}$, $|V_i|=p_i$ for $1\leq i\leq n$, such that there are $\lambda_i$ edges of size $h_i$ incident with every $h_i$ vertices, at most one vertex from each part, for $1\leq i\leq m$ (so no edge is incident with more than one vertex of a part). We use our detachment theorem to show that the obvious necessary conditions for $(\lambda_1,\dots,\lambda_m) K^{h_1,\dots,h_m}_{p_1,\dots,p_n}$ to be expressed as the union $\scr G_1\cup \ldots \cup\scr G_k$ of $k$ edge-disjoint factors, where for $1\leq i\leq k$, $\scr G_i$ is $r_i$-regular, are also sufficient. Baranyai solved the case of $h_1=\dots=h_m$, $\lambda_1=\dots=\lambda_m=1$, $p_1=\dots=p_m$, $r_1=\dots =r_k$. Berge and Johnson (and later Brouwer and Tijdeman, respectively) considered (and solved, respectively) the case of $h_i=i$, $1\leq i\leq m$, $p_1=\dots=p_m=\lambda_1=\dots=\lambda_m=r_1=\dots =r_k=1$. We also extend our result to the case where each $\scr G_i$ is almost regular.
Detachments of Hypergraphs I: The Berge-Johnson Problem
The book covers selected questions of nuclear physics and nuclear astrophysics of light atomic nuclei and their processes at low and ultralow energies. Some methods of calculating nuclear characteristics of the thermonuclear processes considered in nuclear astrophysics are given. The results obtained are directly applicable to the solution of certain nuclear astrophysics problems in describing thermonuclear processes in the Sun, the stars and the Universe. The book is based on the results of some thirty to forty scientific papers, generally published in the last five to seven years, and consists of three sections. The first covers general methods of calculating certain nuclear characteristics for the bound states or the continuum of quantum particles. The second section deals with the methods, the computer programs and the results of the phase shift analysis of elastic scattering in the p3He, p6Li, p12C, n12C, p13C, 4He4He and 4He12C nuclear systems at low and ultralow energies. The results obtained on the basis of three-body models of certain light atomic nuclei are given in the third section, notably for the 7Li, 9Be and 11B nuclei, which are used to examine the conjugated intercluster potentials determined from the phase shifts of elastic scattering and then used in nuclear astrophysics problems connected with the description of thermonuclear processes in the Universe. The book will be useful for advanced students, postgraduate students and PhD candidates in universities and research institutes in the fields of astrophysics and nuclear physics. The book is presented in Russian with a few inserts in English.
Selected methods of nuclear astrophysics
This work introduces two new 1D and 3D confined potentials and presents their solutions using the Tridiagonal Representation Approach (TRA). The wavefunction is written as a series in terms of square-integrable basis functions expressed in terms of Jacobi polynomials. The expansion coefficients are written in terms of new orthogonal polynomials introduced recently by Alhaidari, whose analytical properties are yet to be derived. Moreover, we compute the numerical eigenenergies for both potentials for specific choices of the potential parameters.
Exact solvability of two new 3D and 1D nonrelativistic potentials within the TRA framework
The critical thermodynamics of an $MN$-component field model with cubic anisotropy relevant to the phase transitions in certain crystals with complicated ordering is studied within the four-loop $\varepsilon$ expansion using the minimal subtraction scheme. Investigation of the global structure of RG flows for the physically significant cases M=2, N=2 and M=2, N=3 shows that the model has an anisotropic stable fixed point with new critical exponents. The critical dimensionality of the order parameter is proved to be equal to $N_c^C=1.445(20)$, that is exactly half its counterpart in the real hypercubic model.
Critical thermodynamics of three-dimensional MN-component field model with cubic anisotropy from higher-loop \epsilon expansion
This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuo-motor policies (modular networks) where each module is trained independently. Benefiting from weighted losses, the fine-tuning method significantly improves the performance of the policies for a robotic planar reaching task.
Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination
This paper studies the choice number and paint number of the lexicographic product of graphs. We prove that if $G$ has maximum degree $\Delta$, then for any graph $H$ on $n$ vertices $ch(G[H]) \le (4\Delta+2)(ch(H) +\log_2 n)$ and $\chi_P(G[H]) \le (4\Delta+2) (\chi_P(H)+ \log_2 n)$.
Choosability and paintability of the lexicographic product of graphs
Topological data analysis uses tools from topology -- the mathematical area that studies shapes -- to create representations of data. In particular, in persistent homology, one studies one-parameter families of spaces associated with data, and persistence diagrams describe the lifetime of topological invariants, such as connected components or holes, across the one-parameter family. In many applications, one is interested in working with features associated with persistence diagrams rather than the diagrams themselves. In our work, we explore the possibility of learning several types of features extracted from persistence diagrams using neural networks.
Can neural networks learn persistent homology features?
We present BVRI and unfiltered Clear light curves of 70 stripped-envelope supernovae (SESNe), observed between 2003 and 2020, from the Lick Observatory Supernova Search (LOSS) follow-up program. Our SESN sample consists of 19 spectroscopically normal SNe Ib, two peculiar SNe Ib, six SN Ibn, 14 normal SNe Ic, one peculiar SN Ic, ten SNe Ic-BL, 15 SNe IIb, one ambiguous SN IIb/Ib/c, and two superluminous SNe. Our follow-up photometry has (on a per-SN basis) a mean coverage of 81 photometric points (median of 58 points) and a mean cadence of 3.6 d (median of 1.2 d). From our full sample, a subset of 38 SNe have pre-maximum coverage in at least one passband, allowing for the peak brightness of each SN in this subset to be quantitatively determined. We describe our data collection and processing techniques, with emphasis toward our automated photometry pipeline, from which we derive publicly available data products to enable and encourage further study by the community. Using these data products, we derive host-galaxy extinction values through the empirical colour evolution relationship and, for the first time, produce accurate rise-time measurements for a large sample of SESNe in both optical and infrared passbands. By modeling multiband light curves, we find that SNe Ic tend to have lower ejecta masses and lower ejecta velocities than SNe Ib and IIb, but higher $^{56}$Ni masses.
The Lick Observatory Supernova Search follow-up program: photometry data release of 70 stripped-envelope supernovae
While recent AI-based draping networks have significantly advanced the ability to simulate the appearance of clothes worn by 3D human models, the handling of multi-layered garments remains a challenging task. This paper presents a model for draping multi-layered garments that are unseen during the training process. Our proposed framework consists of three stages: garment embedding, single-layered garment draping, and untangling. The model represents a garment independently of its topological structure by mapping it onto the $UV$ map of a human body model, allowing it to handle previously unseen garments. In the single-layered garment draping phase, the model sequentially drapes all garments in each layer on the body without considering interactions between them. The untangling phase utilizes a GNN-based network to model the interaction between the garments of different layers, enabling the simulation of complex multi-layered clothing. The proposed model demonstrates strong performance on both unseen synthetic and real garment reconstruction data on a diverse range of human body shapes and poses.
Multi-Layered Unseen Garments Draping Network
Purpose: Lung nodule localization in CT scan images is a difficult task due to the arbitrariness of nodule shape, size, and texture. This is a challenge to be faced when developing solutions to improve detection systems. Deep learning approaches based on convolutional neural networks (CNNs) have shown promising results, especially for image recognition, and are among the most widely used algorithms in computer vision. Approach: We use CNN building blocks based on YOLOv5 (You Only Look Once) to learn feature representations for nodule detection labels; in this paper, we introduce a method for lung cancer localization. Chest X-rays and low-dose computed tomography are also possible screening methods, and when it comes to recognizing nodules in radiography, CNN-based computer-aided diagnostic (CAD) systems have demonstrated their worth. The one-stage detector YOLOv5 was trained on 280 annotated CT scans with segmented pulmonary nodules from the public LIDC-IDRI dataset. Results: We analyze the prediction performance for lung nodule locations and demarcate the relevant CT scan regions. In lung nodule localization, accuracy is measured as mean average precision (mAP); the mAP takes into account how well the bounding boxes fit the labels as well as how accurate the predicted classes for those bounding boxes are. We obtained a mAP of 92.27%. Conclusion: The aim of this study was to identify nodules developing in the lungs of the participants; such information on lung nodules remains difficult to find in the medical literature.
Identification of lung nodules CT scan using YOLOv5 based on convolution neural network
The current in response to a bias in certain two-dimensional electron gases (2DEG) can have a nonzero transverse component under a finite magnetic field applied in the plane where the electrons are confined. This phenomenon, known as the planar Hall effect, is accompanied by dependencies of both the longitudinal and the transverse components of the current on the angle $\phi$ between the bias direction and the magnetic field. In 2DEGs with spin-orbit coupling (SOC), such as oxide interfaces, this effect has been experimentally witnessed. Further, a fourfold oscillation in the longitudinal resistance as a function of $\phi$ has also been observed. Motivated by these observations, we perform scattering theory calculations on a 2DEG with SOC in the presence of an in-plane magnetic field, connected to two-dimensional leads on either side, to obtain the longitudinal and transverse conductances. We find that the longitudinal conductance is $\pi$-periodic and the transverse conductance is $2\pi$-periodic in $\phi$. The magnitude of oscillation in the transverse conductance with $\phi$ is enhanced in certain patches of the $(\alpha,b)$-plane, where $\alpha$ is the strength of the SOC and $b$ is the Zeeman energy due to the magnetic field. The oscillation in the transverse conductance with $\phi$ can be highly multi-fold for large values of $\alpha$ and $b$. The highly multi-fold oscillations of the transverse conductance are due to Fabry-P\'erot-type interference between the modes in the central region, as backed by its length-dependent features. Our study establishes that SOC in a material is sufficient to observe the planar Hall effect without the need for anisotropic magnetic ordering or a nontrivial topology of the band structure.
Finite transverse conductance and anisotropic magnetoconductance under an applied in-plane magnetic field in two-dimensional electron gases with strong spin-orbit coupling
The resonant asymptotics of wakefield excitation in plasma by a non-resonant sequence of relativistic electron bunches has been numerically simulated. It is shown that, in the resonant asymptotics at optimal parameters, the wakefield is excited with the maximum growth rate and the amplitude of the excited wakefield is the largest.
Optimal Resonant Asymptotics of Wakefield Excitation in Plasma by Non-resonant Sequence of Relativistic Electron Bunches
A search for the production of three massive vector bosons in proton-proton collisions is performed using data at $\sqrt{s} = 13$ TeV recorded with the ATLAS detector at the Large Hadron Collider in the years 2015-2017, corresponding to an integrated luminosity of $79.8$ fb$^{-1}$. Events with two same-sign leptons $\ell$ (electrons or muons) and at least two reconstructed jets are selected to search for $WWW \to \ell \nu \ell \nu qq$. Events with three leptons without any same-flavour opposite-sign lepton pairs are used to search for $WWW \to \ell \nu \ell\nu \ell \nu$, while events with three leptons and at least one same-flavour opposite-sign lepton pair and one or more reconstructed jets are used to search for $WWZ \to \ell \nu qq \ell \ell$. Finally, events with four leptons are analysed to search for $WWZ \to \ell \nu \ell \nu \ell \ell$ and $WZZ \to qq \ell \ell \ell \ell$. Evidence for the joint production of three massive vector bosons is observed with a significance of 4.1 standard deviations, where the expectation is 3.1 standard deviations.
Evidence for the production of three massive vector bosons with the ATLAS detector
We evaluate radiation pressure from starlight on dust as a feedback mechanism in star-forming galaxies by comparing the luminosity and flux of star-forming systems to the dust Eddington limit. The linear LFIR--L'HCN correlation provides evidence that galaxies may be regulated by radiation pressure feedback. We show that star-forming galaxies approach but do not dramatically exceed Eddington, but many systems are significantly below Eddington, perhaps due to the "intermittency" of star formation. Better constraints on the dust-to-gas ratio and the CO- and HCN-to-H2 conversion factors are needed to make a definitive assessment of radiation pressure as a feedback mechanism.
Radiation Pressure Feedback in Galaxies
In this paper, we consider the (upper) semigroup envelope, i.e. the least upper bound, of a given family of linear Feller semigroups. We explicitly construct the semigroup envelope and show that, under suitable assumptions, it yields viscosity solutions to abstract Hamilton-Jacobi-Bellman-type partial differential equations related to stochastic optimal control problems arising in the field of Robust Finance. We further derive conditions for the existence of a Markov process under a nonlinear expectation related to the semigroup envelope for the case where the state space is locally compact. The procedure is then applied to numerous examples, in particular, nonlinear PDEs that arise from control problems for infinite dimensional Ornstein-Uhlenbeck and L\'evy processes.
Upper envelopes of families of Feller semigroups and viscosity solutions to a class of nonlinear Cauchy problems
Gaussian covariance graph models encode marginal independence among the components of a multivariate random vector by means of a graph $G$. These models are distinctly different from the traditional concentration graph models (often also referred to as Gaussian graphical models or covariance selection models) since the zeros in the parameter are now reflected in the covariance matrix $\Sigma$, as compared to the concentration matrix $\Omega =\Sigma^{-1}$. The parameter space of interest for covariance graph models is the cone $P_G$ of positive definite matrices with fixed zeros corresponding to the missing edges of $G$. As in Letac and Massam [Ann. Statist. 35 (2007) 1278--1323], we consider the case where $G$ is decomposable. In this paper, we construct on the cone $P_G$ a family of Wishart distributions which serve a similar purpose in the covariance graph setting as those constructed by Letac and Massam [Ann. Statist. 35 (2007) 1278--1323] and Dawid and Lauritzen [Ann. Statist. 21 (1993) 1272--1317] do in the concentration graph setting. We proceed to undertake a rigorous study of these "covariance" Wishart distributions and derive several deep and useful properties of this class.
Wishart distributions for decomposable covariance graph models
We present a novel machine-learning approach to estimate selection effects in gravitational-wave observations. Using techniques similar to those commonly employed in image classification and pattern recognition, we train a series of neural-network classifiers to predict the LIGO/Virgo detectability of gravitational-wave signals from compact-binary mergers. We include the effect of spin precession, higher-order modes, and multiple detectors and show that their omission, as it is common in large population studies, tends to overestimate the inferred merger rate in selected regions of the parameter space. Although here we train our classifiers using a simple signal-to-noise ratio threshold, our approach is ready to be used in conjunction with full pipeline injections, thus paving the way toward including actual distributions of astrophysical and noise triggers into gravitational-wave population analyses.
Gravitational-wave selection effects using neural-network classifiers
Electrical energy storage systems (EESSs) with high energy density and power density are essential for the effective miniaturization of future electronic devices. Among different EESSs available in the market, dielectric capacitors relying on swift electronic and ionic polarization-based mechanisms to store and deliver energy already demonstrate high power densities. However, different intrinsic and extrinsic contributions to energy dissipations prevent ceramic-based dielectric capacitors from reaching high recoverable energy density levels. Interestingly, relaxor ferroelectric-based dielectric capacitors, because of their low remnant polarization, show relatively high energy density and thus display great potential for applications requiring high energy density properties. Here, some of the main strategies to improve the energy density properties of perovskite lead-free relaxor systems are reviewed. This includes (i) chemical modification at different crystallographic sites, (ii) chemical additives that do not target lattice sites and (iii) novel processing approaches dedicated to bulk ceramics, thick and thin films, respectively. Recent advancements are summarized concerning the search for relaxor materials with superior energy density properties and the appropriate choice of both composition and processing route to match various needs in the application. Finally, future trends in computationally-aided materials design are presented.
Strategies to Improve the Energy Storage Properties of Perovskite Lead-Free Relaxor Ferroelectrics: A Review
Curie's principle states that "when effects show certain asymmetry, this asymmetry must be found in the causes that gave rise to them". We demonstrate that symmetry equivariant neural networks uphold Curie's principle and can be used to articulate many symmetry-relevant scientific questions into simple optimization problems. We prove these properties mathematically and demonstrate them numerically by training a Euclidean symmetry equivariant neural network to learn symmetry-breaking input to deform a square into a rectangle and to generate octahedra tilting patterns in perovskites.
Finding Symmetry Breaking Order Parameters with Euclidean Neural Networks
Federated learning aims to train models collaboratively across different clients without the sharing of data for privacy considerations. However, one major challenge for this learning paradigm is the {\em data heterogeneity} problem, which refers to the discrepancies between the local data distributions among various clients. To tackle this problem, we first study how data heterogeneity affects the representations of the globally aggregated models. Interestingly, we find that heterogeneous data results in the global model suffering from severe {\em dimensional collapse}, in which representations tend to reside in a lower-dimensional space instead of the ambient space. Moreover, we observe a similar phenomenon on models locally trained on each client and deduce that the dimensional collapse on the global model is inherited from local models. In addition, we theoretically analyze the gradient flow dynamics to shed light on how data heterogeneity result in dimensional collapse for local models. To remedy this problem caused by the data heterogeneity, we propose {\sc FedDecorr}, a novel method that can effectively mitigate dimensional collapse in federated learning. Specifically, {\sc FedDecorr} applies a regularization term during local training that encourages different dimensions of representations to be uncorrelated. {\sc FedDecorr}, which is implementation-friendly and computationally-efficient, yields consistent improvements over baselines on standard benchmark datasets. Code: https://github.com/bytedance/FedDecorr.
Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning
This third paper, devoted to global correspondences of Langlands, bears more particularly on geometric-shifted bilinear correspondences on mixed (bi)motives generated under the action of the products, right by left, of differential elliptic operators. The mathematical frame underlying these correspondences deals with the categories of the Suslin-Voevodsky mixed (bi)motives and of the Chow mixed (bi)motives, which are both in one-to-one correspondence with the functional representation spaces of the shifted algebraic bilinear semigroups. A bilinear holomorphic and supercuspidal spectral representation of an elliptic bioperator is then developed.
n-Dimensional geometric-shifted global bilinear correspondences of Langlands on mixed motives III
We study soliton interaction in the Modified Kadomtsev-Petviashvili-(II) equation (MKP-(II)) using the totally non-negative Grassmannian. One constructs the multi-kink soliton of the MKP equation using the $\tau$-function and the Binet-Cauchy formula, and then investigates the interaction between kink solitons and line solitons. In particular, the Y-type kink-soliton resonance, the O-type kink soliton and the P-type kink soliton of X-shape are investigated. Their interaction amplitudes are computed after choosing appropriate phases.
Soliton Interaction In the Modified Kadomtsev-Petviashvili-(II) Equation
The halo mass function from N-body simulations of collisionless matter is generally used to retrieve cosmological parameters from observed counts of galaxy clusters. This neglects the observational fact that the baryonic mass fraction in clusters is a random variable that, on average, increases with the total mass (within an overdensity of 500). Considering a mock catalog that includes tens of thousands of galaxy clusters, as expected from the forthcoming generation of surveys, we show that the effect of a varying baryonic mass fraction will be observable with high statistical significance. The net effect is a change in the overall normalization of the cluster mass function and a milder modification of its shape. Our results indicate the necessity of taking into account baryonic corrections to the mass function if one wants to obtain unbiased estimates of the cosmological parameters from data of this quality. We introduce the formalism necessary to accomplish this goal. Our discussion is based on the conditional probability of finding a given value of the baryonic mass fraction for clusters of fixed total mass. Finally, we show that combining information from the cluster counts with measurements of the baryonic mass fraction in a small subsample of clusters (including only a few tens of objects) will nearly optimally constrain the cosmological parameters.
Counts of galaxy clusters as cosmological probes: the impact of baryonic physics
We study the perturbative renormalizability of chiral two pion exchange for the singlet and triplet channels within effective field theory, provided that the one pion exchange piece of the interaction has been fully iterated. We determine the number of counterterms/subtractions needed in order to obtain finite results when the cut-off is removed, resulting in three counterterms for the singlet channel and six for the triplet. The results show that chiral two pion exchange can be treated perturbatively up to a center-of-mass momentum of k ~ 200-300 MeV in the singlet channel and k ~ 300-400 MeV in the triplet.
Perturbative renormalizability of chiral two pion exchange in nucleon-nucleon scattering
We discuss decoherence in discrete-time quantum walks in terms of a phenomenological model that distinguishes spin and spatial decoherence. We identify the dominating mechanisms that affect quantum walk experiments realized with neutral atoms walking in an optical lattice. From the measured spatial distributions, we determine with good precision the amount of decoherence per step, which provides a quantitative indication of the quality of our quantum walks. In particular, we find that spin decoherence is the main mechanism responsible for the loss of coherence in our experiment. We also find that the sole observation of ballistic instead of diffusive expansion in position space is not a good indicator for the range of coherent delocalization. We provide further physical insight by distinguishing the effects of short and long time spin dephasing mechanisms. We introduce the concept of coherence length in the discrete-time quantum walk, which quantifies the range of spatial coherences. Unexpectedly, we find that quasi-stationary dephasing does not modify the local properties of the quantum walk, but instead affects spatial coherences. For a visual representation of decoherence phenomena in phase space, we have developed a formalism based on a discrete analogue of the Wigner function. We show that the effects of spin and spatial decoherence differ dramatically in momentum space.
Decoherence Models for Discrete-Time Quantum Walks and their Application to Neutral Atom Experiments
We are concerned with determining the asymptotic behavior of strong solutions of the initial-boundary value problem for general semilinear parabolic equations from the asymptotic behavior of these strong solutions on a finite subset of the domain. More precisely, if the asymptotic behavior of a strong solution is known on an appropriate finite set, then the asymptotic behavior of the strong solution itself is entirely determined in the domain. We prove this property by the energy method.
Determining nodes for semilinear parabolic equations
How well do multisymplectic discretisations preserve travelling wave solutions? To answer this question, the 5-point central difference scheme is applied to the semi-linear wave equation. A travelling wave ansatz leads to an ordinary difference equation, whose solutions correspond to the numerical scheme and can be compared to travelling wave solutions of the corresponding PDE. For a discontinuous nonlinearity the difference equation is solved exactly. For continuous nonlinearities the difference equation is solved using a Fourier series, and resonances that depend on the grid-size are revealed for a smooth nonlinearity. In general, the infinite dimensional functional equation, which must be solved to get the travelling wave solutions, is intractable, but backward error analysis proves to be a powerful tool, as it provides a way to study the solutions of the equation through a simple ODE that describes the behavior to arbitrarily high order. A general framework for using backward error analysis to analyze preservation of travelling waves for other equations and discretisations is presented. Then, the advantages that multisymplectic methods have over other methods are briefly highlighted.
Travelling wave solutions of multisymplectic discretizations of semi-linear wave equations
The experimental status and theoretical uncertainties of the Cabibbo--Kobayashi--Maskawa (CKM) matrix describing the charge-changing weak transitions between quarks with charges -1/3 ($d, s, b$) and 2/3 ($u, c, t$) are reviewed. Some recent methods of obtaining phases of CKM elements are described.
Status of the CKM Matrix
A 50 m^2 RPC carpet was operated at the YangBaJing Cosmic Ray Laboratory (Tibet), located 4300 m a.s.l. The performance of RPCs in detecting Extensive Air Showers was studied. Efficiency and time resolution measurements at the pressure and temperature conditions typical of high-mountain laboratories are reported.
High Altitude test of RPCs for the ARGO-YBJ experiment
We consider the selfconsistent semiclassical Maxwell--Schr\"odinger system for the solid state laser which consists of the Maxwell equations coupled to $N\sim 10^{20}$ Schr\"odinger equations for active molecules. The system contains time-periodic pumping and a weak dissipation. We introduce the corresponding Poincar\'e map $P$ and consider the differential $DP(Y^0)$ at a suitable stationary state $Y^0$. We conjecture that the {\it stable laser action} is due to the {\it parametric resonance} (PR), which means that the maximal absolute value of the corresponding multipliers is greater than one. The multipliers are defined as eigenvalues of $DP(Y^0)$. The PR makes the stationary state $Y^0$ highly unstable, and we suppose that this instability maintains the {\it coherent laser radiation}. We prove that the spectrum Spec$\,DP(Y^0)$ is approximately symmetric with respect to the unit circle $|\mu|=1$ if the dissipation is sufficiently small. More detailed results are obtained for the Maxwell--Bloch system. We calculate the corresponding Poincar\'e map $P$ by successive approximations. The key role in the calculation of the multipliers is played by the sum of $N$ positive terms arising in the second-order approximation for the total current. This fact can be interpreted as the {\it synchronization of molecular currents} in all active molecules, which is provisionally in line with the role of {\it stimulated emission} in the laser action. The calculation of the sum relies on probabilistic arguments, which is one of the main novelties of our approach. Other main novelties are i) the calculation of the differential $DP(Y^0)$ in the "Hopf representation", ii) the block structure of the differential, and iii) the justification of the "rotating wave approximation" by a new estimate for the averaging of slow rotations.
On parametric resonance in the laser action
In this paper, we prove a gap result for locally conformally flat complete non-compact Riemannian manifolds with bounded non-negative Ricci curvature satisfying a scalar curvature average condition. We show that if such a manifold has a positive Green function, then it is flat. This result is proved by setting up a new global Yamabe flow. Other extensions related to bounded positive solutions of a Schr\"odinger equation are also discussed.
Gap Theorems for Locally Conformally Flat Manifolds
Fluid instabilities like the Rayleigh-Taylor, Richtmyer-Meshkov and Kelvin-Helmholtz instabilities can occur in a wide range of physical phenomena, from astrophysical contexts to Inertial Confinement Fusion (ICF). Using Layzer's potential flow model, we derive analytical expressions for the growth rate of the bubble and spike for an ideal magnetized fluid in the R-T and R-M cases. In the presence of a transverse magnetic field, the R-M and R-T instabilities are suppressed or enhanced depending on the directions of the magnetic pressure and the hydrodynamic pressure. Moreover, the interface of the two fluids may oscillate if both fluids are conducting. However, the magnetic field has no effect in the linear case.
Development of Richtmyer-Meshkov and Rayleigh-Taylor Instability in presence of magnetic field
We describe resistive states of the system combining two types of orderings - superconducting and ferromagnetic one. It is shown that in the presence of magnetization dynamics such systems become inherently dissipative and in principle cannot sustain any amount of the superconducting current because of the voltage generated by the magnetization dynamics. We calculate generic current-voltage characteristics of a superconductor/ferromagnet/superconductor Josephson junction with an unpinned domain wall and find the low-current resistance associated with the domain wall motion. We suggest the finite slope of Shapiro steps as the characteristic feature of the regime with domain wall oscillations driven by the ac external current flowing through the junction.
Resistive state of SFS Josephson junctions in the presence of moving domain walls
We study the non-equilibrium steady-state phase transition from probe brane holography in $z=2$ Schr\"odinger spacetime. Concerning the differential conductivity, a phase transition can occur in the conductor state. Considering the constant current operator as the external field and the conductivity as an order parameter, we derive the scaling behavior of the order parameter near the critical point. We explore the critical exponents of the non-equilibrium phase transition in two different Schr\"odinger spacetimes, originating $1)$ from supergravity, and $2)$ from an AdS black hole in light-cone coordinates. Interestingly, we see that even at zero charge density, in the first geometry, the dynamical critical exponent $z=2$ has a major effect on the critical exponents.
Non-Equilibrium Critical Phenomena From Probe Brane Holography in Schr\"odinger Spacetime
We present a generalizable novel view synthesis method where it is possible to modify the visual appearance of rendered views to match a target weather or lighting condition without any scene specific training. Our method is based on a generalizable transformer architecture and is trained on synthetically generated scenes under different appearance conditions. This allows for rendering novel views in a consistent manner for 3D scenes that were not included in the training set, along with the ability to (i) modify their appearance to match the target condition and (ii) smoothly interpolate between different conditions. Experiments on real and synthetic scenes show that our method is able to generate 3D consistent renderings while making realistic appearance changes, including qualitative and quantitative comparisons with applying 2D style transfer methods on rendered views. Please refer to our project page for video results: https://ava-nvs.github.io/
Adjustable Visual Appearance for Generalizable Novel View Synthesis
VIRGOHI21 is an HI source detected in the Virgo Cluster survey of Davies et al. (2004) which has a neutral hydrogen mass of 10^8 M_solar and a velocity width of Delta V_20 = 220 km/s. From the Tully-Fisher relation, a galaxy with this velocity width would be expected to be 12th magnitude or brighter; however deep CCD imaging has failed to turn up a counterpart down to a surface-brightness level of 27.5 B mag/sq. arcsec. The HI observations show that it is extended over at least 16 kpc which, if the system is bound, gives it a minimum dynamical mass of ~10^11 M_solar and a mass to light ratio of M_dyn/L_B > 500 M_solar/L_solar. If it is tidal debris then the putative parents have vanished; the remaining viable explanation is that VIRGOHI21 is a dark halo that does not contain the expected bright galaxy. This object was found because of the low column density limit of our survey, a limit much lower than that achieved by all-sky surveys such as HIPASS. Further such sensitive surveys might turn up a significant number of the dark matter halos predicted by Dark Matter models.
A Dark Hydrogen Cloud in the Virgo Cluster
It is shown that some regular solutions in 5D Kaluza-Klein gravity may have interesting properties if one of the parameters lies in the Planck region. In this case the Kretschmann metric invariant runs up to the maximal value attainable in nature, i.e. the metric becomes practically singular. This observation allows us to suppose that the problems posed by such a soft singularity will be resolved much more easily in a future quantum gravity than those posed by an ordinary hard singularity (the Reissner-Nordstr\"om singularity, for example). It is suggested that an analogous consideration can be applied to avoid the hard singularities connected with gauge charges.
Soft singularity and the fundamental length
Meaning is defined by the company it keeps. However, company is two-fold: it is based on the identity of tokens and also on their position (topology). We argue that a position-centric perspective is more general and useful. The classic MLM and CLM objectives in NLP are easily phrased as position predictions over the whole vocabulary. Adapting the relative position encoding paradigm in NLP to create relative labels for self-supervised learning, we seek to show superior pre-training as judged by performance on downstream tasks.
Relative Position Prediction as Pre-training for Text Encoders
Quantum stabilizer codes (QSCs) suffer from a low quantum coding rate, since they have to recover the quantum bits (qubits) in the face of both bit-flip and phase-flip errors. In this treatise, we conceive a low-complexity concatenated quantum turbo code (QTC) design exhibiting a high quantum coding rate. The high quantum coding rate is achieved by combining the quantum-domain version of short-block codes (SBCs), also known as single parity check (SPC) codes, as the outer codes and quantum unity-rate codes (QURCs) as the inner codes. Despite its design simplicity, the proposed QTC yields a near-hashing-bound error correction performance. For instance, compared to the best half-rate QTC known in the literature, namely the QIrCC-QURC scheme, which operates at a distance of $D = 0.037$ from the quantum hashing bound, our novel QSBC-QURC scheme can operate at a distance of $D = 0.029$. It is also worth mentioning that this is the first instantiation of QTCs capable of adjusting the quantum encoders according to the quantum coding rate required for mitigating the Pauli errors, given the different depolarizing probabilities of the quantum channel.
Near-Hashing-Bound Multiple-Rate Quantum Turbo Short-Block Codes