We report that the surface conductivity of Na$_{2}$IrO$_3$ crystals is highly tunable by high-energy Ar plasma etching and can be tuned from insulating to metallic with increasing etching time. Temperature-dependent electrical transport for the metallic samples shows signatures of first-order phase transitions, consistent with the charge- or spin-density-wave-like phase transitions recently predicted theoretically. Additionally, grazing-incidence small-angle x-ray scattering (GISAXS) reveals that the room-temperature surface structure of Na$_{2}$IrO$_3$ does not change after plasma etching.
Density wave like transport anomalies in surface doped Na$_{2}$IrO$_3$
Diamond Light Source (DLS), the UK synchrotron facility, attracts scientists from across the world to perform ground-breaking x-ray experiments. With over 3000 scientific users per year, vast amounts of data are collected across the experimental beamlines, with the highest volume of data collected during tomographic imaging experiments. A growing interest in tomography as an imaging technique has led to an expansion in the range of experiments performed, in addition to a growth in the size of the data per experiment. Savu is a portable, flexible, scientific processing pipeline capable of processing multiple, n-dimensional datasets in serial on a PC, or in parallel across a cluster. Developed at DLS, and successfully deployed across the beamlines, it uses a modular plugin format to enable experiment-specific processing and utilises parallel HDF5 to remove RAM restrictions. The Savu design, described throughout this paper, focuses on easy integration of existing and new functionality, flexibility, and ease of use for users and developers alike.
Savu: A Python-based, MPI Framework for Simultaneous Processing of Multiple, N-dimensional, Large Tomography Datasets
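The abstract above describes Savu's modular plugin design only at a high level. As a rough illustration of the plugin-chain idea (not the actual Savu API; the class names, the `process` signature, and the particular correction steps are invented for this sketch), a minimal pipeline might look like:

```python
import numpy as np

class Plugin:
    """Base class: each plugin transforms an n-dimensional dataset."""
    def process(self, data: np.ndarray) -> np.ndarray:
        raise NotImplementedError

class DarkFlatCorrection(Plugin):
    def __init__(self, dark, flat):
        self.dark, self.flat = dark, flat
    def process(self, data):
        # Normalise projections by dark- and flat-field images.
        return (data - self.dark) / np.maximum(self.flat - self.dark, 1e-6)

class MinusLog(Plugin):
    def process(self, data):
        # Convert transmission values to attenuation.
        return -np.log(np.clip(data, 1e-6, None))

def run_pipeline(data, plugins):
    for plugin in plugins:
        data = plugin.process(data)
    return data

if __name__ == "__main__":
    projections = np.random.rand(16, 64, 64) + 0.5   # toy (angle, y, x) stack
    dark = np.zeros((64, 64))
    flat = np.ones((64, 64))
    out = run_pipeline(projections, [DarkFlatCorrection(dark, flat), MinusLog()])
    print(out.shape)
```

The real framework adds MPI-parallel execution and HDF5-backed datasets, which this toy chain deliberately omits.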
We propose a decoder-only language model, \textit{VoxtLM}, that can perform four tasks: speech recognition, speech synthesis, text generation, and speech continuation. VoxtLM integrates the text vocabulary with discrete speech tokens derived from self-supervised speech features and uses special tokens to enable multitask learning. Compared to a single-task model, VoxtLM exhibits a significant improvement in speech synthesis, with improvements in both speech intelligibility, from 28.9 to 5.6, and objective quality, from 2.68 to 3.90. VoxtLM also improves speech generation and speech recognition performance over the single-task counterpart. VoxtLM is trained with publicly available data, and the training recipes and model checkpoints will be open-sourced to make the work fully reproducible.
Voxtlm: unified decoder-only models for consolidating speech recognition/synthesis and speech/text continuation tasks
Single image blind deblurring is highly ill-posed, as neither the latent sharp image nor the blur kernel is known. Even though considerable progress has been made, several major difficulties remain for blind deblurring, including the trade-off between high-performance deblurring and real-time processing. Moreover, we observe that current single image blind deblurring networks cannot further improve or stabilize the performance, but instead significantly degrade it, when re-deblurring is applied repeatedly. This implies a limitation of these networks in modeling an ideal deblurring process. In this work, we make two contributions to tackle the above difficulties: (1) We introduce the idempotent constraint into the deblurring framework and present a deep idempotent network that achieves improved blind non-uniform deblurring performance with stable re-deblurring. (2) We propose a simple yet efficient deblurring network with lightweight encoder-decoder units and a recurrent structure that deblurs images in a progressive residual fashion. Extensive experiments on synthetic and realistic datasets prove the superiority of our proposed framework. Remarkably, our proposed network is nearly 6.5X smaller and 6.4X faster than the state-of-the-art while achieving comparable high performance.
Deep Idempotent Network for Efficient Single Image Blind Deblurring
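The idempotent constraint mentioned in the abstract can be made concrete as an extra loss term that penalises any change when the network's output is passed through the network again. The PyTorch sketch below is one plausible reading of such a term; the toy network, the L1 distances, the detach placement, and the weight `lam` are assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

# Toy stand-in for a deblurring network; the paper's architecture is a
# lightweight encoder-decoder with a recurrent, progressive-residual structure.
net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 3, 3, padding=1))

def idempotent_loss(net, blurry, sharp, lam=0.1):
    """Reconstruction loss plus a penalty forcing f(f(x)) to stay close to f(x)."""
    once = net(blurry)                         # first deblurring pass
    twice = net(once.detach())                 # re-deblur the first output
    rec = torch.nn.functional.l1_loss(once, sharp)
    idem = torch.nn.functional.l1_loss(twice, once.detach())
    return rec + lam * idem

blurry = torch.rand(2, 3, 32, 32)
sharp = torch.rand(2, 3, 32, 32)
print(idempotent_loss(net, blurry, sharp).item())
```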
The diagonalization of the metrical Hamiltonian of a scalar field with an arbitrary coupling to the curvature in N-dimensional homogeneous isotropic space is performed. The energy spectrum of the corresponding quasiparticles is obtained. The energies of the quasiparticles corresponding to the diagonal form of the canonical Hamiltonian are calculated. A modified energy-momentum tensor with the following properties is constructed: it coincides with the metrical energy-momentum tensor for a conformal scalar field; under its diagonalization the energies of the relevant particles of a nonconformal field coincide with the oscillator frequencies; and the density of such particles created in a nonstationary metric is finite. It is shown that the Hamiltonian calculated with the modified energy-momentum tensor can be constructed as a canonical Hamiltonian under a special choice of variables.
Nonconformal Scalar Field in a Homogeneous Isotropic Space and the Method of Hamiltonian Diagonalization
We designed, fabricated, and tested an optical hybrid that supports an octave of bandwidth (900-1800 nm) with below 4 dB insertion loss using multiplane light conversion. Measured phase errors are below 3 degrees across a measurement bandwidth of 390 nm.
Ultrabroadband Polarization Insensitive Hybrid using Multiplane Light Conversion
We calculate radiative-recoil corrections of order $\alpha^2(Z\alpha)(m/M)E_F$ to hyperfine splitting in muonium generated by the diagrams with electron and muon polarization loops. These corrections are enhanced by the large logarithm of the electron-muon mass ratio. The leading logarithm cubed and logarithm squared contributions were obtained a long time ago. The single-logarithmic and nonlogarithmic contributions calculated here improve the theory of hyperfine splitting, and affect the value of the electron-muon mass ratio extracted from the experimental data on the muonium hyperfine splitting.
Two-Loop Polarization Contributions to Radiative-Recoil Corrections to Hyperfine Splitting in Muonium
We state and prove a corrected version of a theorem of Singerman, which relates the existence of symmetries (anticonformal involutions) of a quasiplatonic Riemann surface $\mathcal S$ (one uniformised by a normal subgroup $N$ of finite index in a cocompact triangle group $\Delta$) to the properties of the group $G=\Delta/N$. We give examples to illustrate the revised necessary and sufficient conditions for the existence of symmetries, and we relate them to properties of the associated dessins d'enfants, or hypermaps.
Symmetries of quasiplatonic Riemann surfaces
We present a new pipelined approach to compute all pairs shortest paths (APSP) in a directed graph with nonnegative integer edge weights (including zero weights) in the CONGEST model in the distributed setting. Our deterministic distributed algorithm computes shortest paths of distance at most $\Delta$ for all pairs of vertices in at most $2 n \sqrt{\Delta} + 2n$ rounds, and more generally, it computes $h$-hop shortest paths for $k$ sources in $2\sqrt{nkh} + n + k$ rounds. The algorithm is simple, and it has some novel features and a nontrivial analysis. It uses only the directed edges in the graph for communication. This algorithm can be used as a base within asymptotically faster algorithms that match or improve on the current best deterministic bound of $\tilde{O}(n^{3/2})$ rounds for this problem when edge weights are $O(n)$ or shortest path distances are $\tilde{O}(n^{3/2})$.
A Deterministic Distributed Algorithm for Weighted All Pairs Shortest Paths Through Pipelining
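The quantity computed by the distributed algorithm above, $h$-hop shortest paths from $k$ sources, has a simple sequential reference implementation: run $h$ rounds of edge relaxations per source. The sketch below computes that quantity only; it does not emulate the CONGEST rounds or the pipelining technique of the paper:

```python
from math import inf

def h_hop_shortest_paths(n, edges, sources, h):
    """Sequential reference for h-hop shortest paths from a set of sources.

    edges: list of (u, v, w) directed edges with nonnegative integer weight w.
    Returns dist[s][v] = weight of the lightest path from s to v that uses
    at most h edges (inf if no such path exists).
    """
    dist = {s: [inf] * n for s in sources}
    for s in sources:
        dist[s][s] = 0
    for _ in range(h):                       # h rounds of relaxation
        for s in sources:
            new = dist[s][:]
            for u, v, w in edges:
                if dist[s][u] + w < new[v]:
                    new[v] = dist[s][u] + w
            dist[s] = new
    return dist

edges = [(0, 1, 2), (1, 2, 3), (0, 2, 10)]
print(h_hop_shortest_paths(3, edges, sources=[0], h=2))   # {0: [0, 2, 5]}
```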
Functional programming languages use garbage collection for heap memory management. Ideally, garbage collectors should reclaim all objects that are dead at the time of garbage collection. An object is dead at an execution instant if it is not used in future. Garbage collectors collect only those dead objects that are not reachable from any program variable. This is because they are not able to distinguish between reachable objects that are dead and reachable objects that are live. In this paper, we describe a static analysis to discover reachable dead objects in programs written in first-order, eager functional programming languages. The results of this technique can be used to make reachable dead objects unreachable, thereby allowing garbage collectors to reclaim more dead objects.
Liveness of Heap Data for Functional Programs
Several charged axial-vector hidden-charm states have been reported experimentally. Within the framework of the color-magnetic interaction, we systematically consider the mass spectrum of the hidden-charm and hidden-bottom tetraquark states. It is impossible to accommodate all three charged states $Z_c(3900)$, $Z_c(4025)$ and $Z_c(4200)$ within the axial-vector tetraquark spectrum simultaneously, so not all three states can be tetraquark candidates. Moreover, the eigenvector of the chromomagnetic interaction contains valuable information about the decay pattern of the tetraquark states. The dominant decay mode of the lowest axial-vector tetraquark state is $J/\psi \pi$ while its $D^*\bar{D}$ and $\bar{D}^*D^*$ modes are strongly suppressed, in contrast with the fact that the dominant decay modes of $Z_c(3900)$ and $Z_c(4025)$ are $\bar{D}D^*$ and $\bar{D}^*D^*$, respectively. We emphasize that all the available experimental information indicates that $Z_c(4200)$ is a very promising candidate for the lowest axial-vector hidden-charm tetraquark state.
Hidden-Charm Tetraquarks and Charged Zc States
Next generation virtual assistants are envisioned to handle multimodal inputs (e.g., vision, memories of previous interactions, in addition to the user's utterances), and perform multimodal actions (e.g., displaying a route in addition to generating the system's utterance). We introduce Situated Interactive MultiModal Conversations (SIMMC) as a new direction aimed at training agents that take multimodal actions grounded in a co-evolving multimodal input context in addition to the dialog history. We provide two SIMMC datasets totalling ~13K human-human dialogs (~169K utterances) using a multimodal Wizard-of-Oz (WoZ) setup, on two shopping domains: (a) furniture (grounded in a shared virtual environment) and, (b) fashion (grounded in an evolving set of images). We also provide logs of the items appearing in each scene, and contextual NLU and coreference annotations, using a novel and unified framework of SIMMC conversational acts for both user and assistant utterances. Finally, we present several tasks within SIMMC as objective evaluation protocols, such as Structural API Prediction and Response Generation. We benchmark a collection of existing models on these SIMMC tasks as strong baselines, and demonstrate rich multimodal conversational interactions. Our data, annotations, code, and models are publicly available.
Situated and Interactive Multimodal Conversations
To study stellar populations, it is common to combine chemical abundances from different spectroscopic surveys/studies in which different setups were used. These inhomogeneities can lead to inaccurate scientific conclusions. In this work, we studied one aspect of the problem: when deriving chemical abundances from high-resolution stellar spectra, what differences originate from the use of different radiative transfer codes?
How much can we trust high-resolution spectroscopic stellar chemical abundances?
Aims. We aim to search for and characterize inflows and outflows of molecular gas in four ultraluminous infrared galaxies (ULIRGs) at $z\sim0.2-0.3$ and one distant QSO at $z=6.13$. Methods. We use Herschel PACS and ALMA Band 7 observations of the hydroxyl molecule (OH) line at rest-frame wavelength 119 $\mu$m, which in absorption can provide unambiguous evidence for inflows or outflows of molecular gas in the nuclear regions of galaxies. Our study doubles the number of OH observations of luminous systems at $z\sim0.2-0.3$ and pushes the search for molecular outflows based on the OH transition to $z\sim6$. Results. We detect OH high-velocity absorption wings in three of the four ULIRGs. In two cases, IRAS F20036-1547 and IRAS F13352+6402, the blueshifted absorption profiles indicate the presence of powerful and fast molecular gas outflows. Consistent with an inside-out quenching scenario, these outflows are depleting the central reservoir of molecular gas at a rate similar to that of the intense star formation activity. In the case of the starburst-dominated system IRAS 10091+4704, we detect an inverted P-Cygni profile that is unique among ULIRGs and indicates the presence of a fast ($\sim400$ km s$^{-1}$) inflow of molecular gas at a rate of $\sim100~M_{\odot}~{\rm yr}^{-1}$ towards the central region. Finally, we tentatively detect ($\sim3\sigma$) the OH doublet in absorption in the $z=6.13$ QSO ULAS J131911+095051. The OH feature is blueshifted with a median velocity that suggests the presence of a molecular outflow, although characterized by a modest molecular mass loss rate of $\sim200~M_{\odot}~{\rm yr}^{-1}$. This value is comparable to the small mass outflow rates found in the stacking of the [CII] spectra of other $z\sim6$ QSOs and suggests that ejective feedback in this phase of the evolution of ULAS J131911+095051 has subsided.
Molecular Gas Inflows and Outflows in Ultraluminous Infrared Galaxies at $z\sim0.2$ and one QSO at $z=6.1$
We develop a polynomial reduction procedure that transforms any gauge fixed CHY amplitude integrand for $n$ scattering particles into a $\sigma$-moduli multivariate polynomial of what we call the $\textit{standard form}$. We show that a standard form polynomial must have a specific $\textit{ladder type}$ monomial structure, which has finite size at any $n$, with highest multivariate degree given by $(n-3)(n-4)/2$. This set of monomials spans a complete basis for polynomials with rational coefficients in kinematic data on the support of scattering equations. Subsequently, at tree and one-loop level, we employ the global residue theorem to derive a prescription that evaluates any CHY amplitude by means of collecting simple residues at infinity only. The prescription is then applied explicitly to some tree and one-loop amplitude examples.
Polynomial reduction and evaluation of tree- and loop-level CHY amplitudes
We present results from an extensive analytic and numerical study of a two-dimensional model of a square array of ultrasmall Josephson junctions. We include the ultrasmall self and mutual capacitances of the junctions, for the same parameter ranges as those produced in the experiments. The model Hamiltonian studied includes the Josephson, $E_J$, as well as the charging, $E_C$, energies between superconducting islands. The corresponding quantum partition function is expressed in different calculationally convenient ways within its path-integral representation. The phase diagram is analytically studied using a WKB renormalization group (WKB-RG) plus a self-consistent harmonic approximation (SCHA) analysis, together with non-perturbative quantum Monte Carlo (QMC) simulations. Most of the results presented here pertain to the superconductor to normal (S-N) region, although some results for the insulating to normal (I-N) region are also included. We find very good agreement between the WKB-RG and QMC results when compared to the experimental data. To fit the data, we only used the experimentally determined capacitances as fitting parameters. The WKB-RG analysis in the S-N region predicts a low temperature instability, i.e. a Quantum Induced Transition (QUIT). We carefully carry out simulations and a finite size analysis of $T_{QUIT}$ as a function of the magnitude of the imaginary time axis $L_\tau$. We find that for some relatively large values of $\alpha=E_C/E_J$ ($1\leq \alpha \leq 2.25$), the $L_\tau\to\infty$ limit does appear to give a {\it non-zero} $T_{QUIT}$, while for $\alpha \ge 2.5$, $T_{QUIT}=0$. We use the SCHA to analytically understand the $L_\tau$ dependence of the QMC results, with good agreement between them. Finally, we also carried out a WKB-RG analysis in the I-N region and found no evidence of a low temperature QUIT, up to lowest order in ${\alpha}^{-1}$.
Critical properties of two-dimensional Josephson junction arrays with zero-point quantum fluctuations
Existing person re-identification models often have low generalizability, which is mostly due to the limited availability of large-scale labeled training data. However, labeling large-scale training data is very expensive and time-consuming, while large-scale synthetic datasets show promising value for learning generalizable person re-identification models. Therefore, in this paper a novel and practical person re-identification task is proposed, i.e. how to use a labeled synthetic dataset and an unlabeled real-world dataset to train a universal model. In this way, human annotations are no longer required, and the approach is scalable to large and diverse real-world datasets. To address the task, we introduce a framework with high generalizability, namely DomainMix. Specifically, the proposed method first clusters the unlabeled real-world images and selects the reliable clusters. During training, to address the large gap between the two domains, a domain-invariant feature learning method is proposed, which introduces a new loss, i.e. the domain balance loss, to conduct adversarial learning between domain-invariant feature learning and domain discrimination, and meanwhile learns a discriminative feature for person re-identification. In this way, the domain gap between synthetic and real-world data is much reduced, and the learned feature is generalizable thanks to the large-scale and diverse training data. Experimental results show that the proposed annotation-free method is broadly comparable to its counterpart trained with full human annotations, which is quite promising. In addition, it achieves the current state of the art on several person re-identification datasets under direct cross-dataset evaluation.
DomainMix: Learning Generalizable Person Re-Identification Without Human Annotations
The SM-like Higgs boson with a mass of 125 GeV discovered at the LHC invites a natural interpretation of electroweak symmetry breaking. As a theory that successfully offers this naturalness, technicolor with a scalar doublet and two scalars colored under both $SU(3)_c$ and $SU(N_{TC})$, considered as a low-energy effective theory, is proposed after the discovery of the SM-like Higgs boson. At present, the model can be consistent with both the direct and indirect experimental limits. In particular, consistency with precision electroweak measurements is realized by the colored scalars, which give rise to a large {\it negative} contribution to the $S$ parameter. It is also promising to detect techni-pions and these colored scalars at the LHC.
Technicolor with Scalar Doublet After the Discovery of Higgs Boson
In this paper we propose a dual-time stepping scheme for the Smoothed Particle Hydrodynamics (SPH) method. Dual-time stepping has been used in the context of other numerical methods for the simulation of incompressible fluid flows. Here we provide a scheme that combines the entropically damped artificial compressibility (EDAC) along with dual-time stepping. The method is accurate, robust, and demonstrates up to seven times better performance than the standard weakly-compressible formulation. We demonstrate several benchmarks showing the applicability of the scheme. In addition, we provide a completely open source implementation and a reproducible manuscript.
Dual-Time Smoothed Particle Hydrodynamics for Incompressible Fluid Simulation
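Dual-time stepping itself can be illustrated independently of SPH: each physical (implicit) time step is converged by marching an inner pseudo-time iteration until the residual vanishes. The sketch below applies the idea to a scalar ODE with backward Euler; the pseudo-time step size, tolerance, and test equation are arbitrary choices, and the paper's EDAC-SPH formulation is of course far richer:

```python
import numpy as np

def f(y):
    # Physical right-hand side: simple linear decay dy/dt = -2*y.
    return -2.0 * y

def dual_time_step(y_n, dt, dtau=0.01, tol=1e-10, max_inner=10000):
    """Advance one backward-Euler step by marching an inner pseudo-time loop.

    The inner iterations drive the residual R(y) = f(y) - (y - y_n)/dt to zero,
    so the converged y satisfies the implicit update (y - y_n)/dt = f(y).
    """
    y = y_n.copy()
    for _ in range(max_inner):
        residual = f(y) - (y - y_n) / dt
        y = y + dtau * residual            # explicit pseudo-time update
        if np.max(np.abs(residual)) < tol:
            break
    return y

y = np.array([1.0])
for _ in range(10):                        # ten physical steps of dt = 0.1
    y = dual_time_step(y, dt=0.1)
print(y, (1.0 / 1.2) ** 10)                # matches the backward-Euler decay
```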
The oscillation spectrum of a one-dimensional vertical dust string formed inside a glass box on top of the lower electrode in a GEC reference cell was studied. A mechanism for creating a single vertical dust string is described. It is shown that the oscillation amplitudes, resonance frequencies, damping coefficients, and oscillation phases of the dust particles separate into two distinct groups. One group exhibits low damping coefficients, increasing amplitudes and decreasing resonance frequencies for dust particles closer to the lower electrode. The other group shows high damping coefficients but anomalous resonance frequencies and amplitudes. At low oscillation frequencies, the two groups are also separated by a {\pi}-phase difference. One possible cause for the difference in behavior between the two groups is discussed.
One-dimensional vertical dust strings in a glass box
This paper is concerned with the global existence of small solutions to pure-power nonlinear Schroedinger equations subject to radially symmetric data with critical regularity. Under radial symmetry we focus our attention on the case where the power of the nonlinearity is somewhat smaller than the pseudoconformal power and the initial data belong to the scale-invariant homogeneous Sobolev space. In spite of the negative-order differentiability of the initial data, the nonlinear Schroedinger equation has global-in-time solutions provided that the initial data have a small norm. The key ingredient in the proof of this result is an effective use of global weighted smoothing estimates specific to radially symmetric solutions.
Nonlinear Schroedinger equations with radially symmetric data of critical regularity
Due to their outstanding capability for data generation, Generative Adversarial Networks (GANs) have attracted considerable attention in unsupervised learning. However, training GANs is difficult, since the training distribution is dynamic for the discriminator, leading to unstable image representation. In this paper, we address the problem of training GANs from a novel perspective, \emph{i.e.,} robust image classification. Motivated by studies on robust image representation, we propose a simple yet effective module, namely AdaptiveMix, for GANs, which shrinks the regions of training data in the image representation space of the discriminator. Considering that it is intractable to directly bound the feature space, we propose to construct hard samples and narrow down the feature distance between hard and easy samples. The hard samples are constructed by mixing a pair of training images. We evaluate the effectiveness of our AdaptiveMix with widely-used and state-of-the-art GAN architectures. The evaluation results demonstrate that our AdaptiveMix can facilitate the training of GANs and effectively improve the image quality of generated samples. We also show that our AdaptiveMix can be further applied to image classification and Out-Of-Distribution (OOD) detection tasks by equipping it with state-of-the-art methods. Extensive experiments on seven publicly available datasets show that our method effectively boosts the performance of baselines. The code is publicly available at https://github.com/WentianZhang-ML/AdaptiveMix.
Improving GAN Training via Feature Space Shrinkage
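The core idea above, constructing hard samples by mixing image pairs and shrinking their feature distance to the corresponding easy samples, can be sketched as a loss term. The snippet below is our reading of that idea, not the official AdaptiveMix code; the toy feature extractor, the Beta-sampled mixing coefficient, and the squared-error distance are all assumptions:

```python
import torch
import torch.nn as nn

feature_extractor = nn.Sequential(       # toy stand-in for the discriminator trunk
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())

def adaptive_mix_loss(feat, x1, x2, lam=None):
    """Shrink the feature distance between mixed (hard) and unmixed (easy) samples."""
    if lam is None:
        lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    x_mix = lam * x1 + (1.0 - lam) * x2              # hard sample from an image pair
    f_mix = feat(x_mix)
    f_easy = lam * feat(x1) + (1.0 - lam) * feat(x2) # interpolated easy features
    return ((f_mix - f_easy) ** 2).mean()

x1, x2 = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)
print(adaptive_mix_loss(feature_extractor, x1, x2).item())
```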
Dynamical mean-field theory is used to study the magnetic instabilities and phase diagram of the double-exchange (DE) model with Hund's coupling $J_H > 0$ in infinite dimensions. In addition to ferromagnetic (FM) and antiferromagnetic (AF) phases, the DE model supports a broad class of short-range ordered (SRO) states with extensive entropy and short-range magnetic order. For any site on the Bethe lattice, the correlation parameter $q$ of a SRO state is given by the average $q=\langle \sin^2(\theta_i/2)\rangle$, where $\theta_i$ is the angle between any spin and its neighbors. Unlike the FM ($q=0$) and AF ($q=1$) transitions, the transition temperature $T_{SRO}$ of a SRO state with $0<q<1$ cannot be obtained from the magnetic susceptibility. However, a solution of the coupled Green's functions in the weak-coupling limit indicates that a SRO state always has a higher transition temperature than the AF for all fillings $p<1$, and even than the FM for $0.26\le p \le 0.39$. For $0.39<p<0.73$, where both the FM and AF phases are unstable for small $J_H$, a SRO phase has a non-zero $T_{SRO}$ except close to $p=0.5$. As $J_H$ increases, $T_{SRO}$ eventually vanishes and the FM dominates. For small $J_H$, the $T=0$ phase diagram is greatly simplified by the presence of the SRO phase. A SRO phase is found to have lower energy than either the FM or AF phases for $0.26\le p<1$. Phase separation (PS) disappears as $J_H\to 0$ but appears for $J_H\neq 0$. For $p$ near 1, PS occurs between an AF with $p=1$ and either a SRO or a FM phase. The stability of a SRO state at $T=0$ can be understood by examining the interacting DOS, which is gapped for any nonzero $J_H$ in an AF but only when $J_H$ exceeds a critical value in a SRO state.
Magnetic Instabilities and Phase Diagram of the Double-Exchange Model in Infinite Dimensions
Machine Learning has been a big success story during the AI resurgence. One particular standout success relates to unsupervised learning from a massive amount of data, albeit much of it relates to one modality/type of data at a time. In spite of early assertions of the unreasonable effectiveness of data, there is increasing recognition of the value of utilizing knowledge whenever it is available or can be created purposefully. In this paper, we focus on discussing the indispensable role of knowledge for deeper understanding of complex text and multimodal data in situations where (i) large amounts of training data (labeled/unlabeled) are not available or are labor intensive to create, (ii) the objects (particularly text) to be recognized are complex (i.e., beyond simple entity-person/location/organization names), such as implicit entities and highly subjective content, and (iii) applications need to use complementary or related data in multiple modalities/media. What brings us to the cusp of rapid progress is our ability to (a) create knowledge, varying from comprehensive or cross-domain to domain- or application-specific, and (b) carefully exploit that knowledge to further empower or extend the applications of ML/NLP techniques. Using early results in several diverse situations - both in data types and applications - we seek to foretell unprecedented progress in our ability for deeper understanding and exploitation of multimodal data.
Knowledge will Propel Machine Understanding of Content: Extrapolating from Current Examples
The purpose of this paper is to investigate the relationship between the index of reducibility and the Chern coefficient for primary ideals. The main result of this paper gives a characterization of a Cohen-Macaulay ring in terms of its index of reducibility, its Cohen-Macaulay type, and the Chern coefficient for parameter ideals. As corollaries to the main theorem we obtain characterizations of a Gorenstein ring in terms of its Chern coefficient for parameter ideals.
The Chern Coefficient and Cohen-Macaulay rings
Deep learning models have achieved high performance on many tasks, and thus have been applied to many security-critical scenarios. For example, deep learning-based face recognition systems have been used to authenticate users for access to many security-sensitive applications like payment apps. Such usage of deep learning systems provides adversaries with sufficient incentives to perform attacks against these systems for their adversarial purposes. In this work, we consider a new type of attack, called a backdoor attack, where the attacker's goal is to create a backdoor into a learning-based authentication system, so that he can easily circumvent the system by leveraging the backdoor. Specifically, the adversary aims at creating backdoor instances, so that the victim learning system will be misled to classify the backdoor instances as a target label specified by the adversary. In particular, we study backdoor poisoning attacks, which achieve backdoor attacks using poisoning strategies. Different from all existing work, our studied poisoning strategies can apply under a very weak threat model: (1) the adversary has no knowledge of the model and the training set used by the victim system; (2) the attacker is allowed to inject only a small amount of poisoning samples; (3) the backdoor key is hard to notice, even by human beings, to achieve stealthiness. We conduct evaluations to demonstrate that a backdoor adversary can inject only around 50 poisoning samples while achieving an attack success rate above 90%. We are also the first to show that a data poisoning attack can create physically implementable backdoors without touching the training process. Our work demonstrates that backdoor poisoning attacks pose real threats to a learning system, and thus highlights the importance of further investigation and of proposing defense strategies against them.
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
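A blended poisoning strategy of the kind described above can be sketched in a few lines: blend a key pattern faintly into a handful of training images and relabel them with the target class. The numbers below (50 samples, blend ratio 0.1) are illustrative defaults, not the exact settings evaluated in the paper:

```python
import numpy as np

def make_poisoned_samples(images, backdoor_key, target_label, num_poison=50, alpha=0.1):
    """Blend a hard-to-notice key pattern into a few images and relabel them.

    images: float array in [0, 1] of shape (N, H, W, C).
    backdoor_key: pattern of shape (H, W, C); alpha controls its visibility.
    """
    idx = np.random.choice(len(images), size=num_poison, replace=False)
    poisoned = (1 - alpha) * images[idx] + alpha * backdoor_key
    labels = np.full(num_poison, target_label)
    return poisoned, labels

clean = np.random.rand(1000, 32, 32, 3)
key = np.random.rand(32, 32, 3)
x_poison, y_poison = make_poisoned_samples(clean, key, target_label=7)
print(x_poison.shape, y_poison[:5])
```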
We give an upper bound of a Hamiltonian displacement energy of a unit disk cotangent bundle $D^*M$ in a cotangent bundle $T^*M$, when the base manifold $M$ is an open Riemannian manifold. Our main result is that the displacement energy is not greater than $C r(M)$, where $r(M)$ is the inner radius of $M$, and $C$ is a dimensional constant. As an immediate application, we study symplectic embedding problems of unit disk cotangent bundles. Moreover, combined with results in symplectic geometry, our main result shows the existence of short periodic billiard trajectories and short geodesic loops.
Displacement energy of unit disk cotangent bundles
In this paper we study the hyperbolic and parabolic strip deformations of ideal (possibly once-punctured) hyperbolic polygons whose vertices are decorated with horoballs. We prove that the interiors of their arc complexes parametrise the open convex set of all uniformly lengthening infinitesimal deformations of the decorated hyperbolic metrics on these surfaces, motivated by the work of Danciger-Gu\'eritaud-Kassel.
Strip deformations of decorated hyperbolic polygons
A new effective method for factorization of a class of nonrational $n\times n$ matrix-functions with \emph{stable partial indices} is proposed. The method is a generalization of the one recently proposed by the authors which was valid for the canonical factorization only. The class of considered matrices is motivated by problems originated from applications. The properties and details of the asymptotic procedure are illustrated by examples. The efficiency of the procedure is highlighted by numerical results.
Factorization of a class of matrix-functions with stable partial indices
We present an initial look at the FIR-radio correlation within the star-forming disk of M51a using {\it Spitzer} MIPS imaging, observed as part of the {\it Spitzer} Infrared Nearby Galaxies Survey (SINGS), and WSRT radio continuum data. At an estimated distance of 8.2 Mpc, we are able to probe the variations in the 70$\micron$/22cm ratios across the disk of M51a at a linearly projected scale of 0.75 kpc. We measure a dispersion of 0.191 dex, comparable to the measured dispersion of the global FIR-radio correlation. Such little scatter in the IR/radio ratios across the disk suggests that we have yet to probe physical scales small enough to observe a breakdown in the correlation. We also find that the global 70$\micron$/22cm ratio of M51a is 25% larger than the median value of the disk, suggesting that the brighter disk regions drive the globally measured ratio.
An Initial Look at the FIR-Radio Correlation within M51a using Spitzer
A wire that conducts an electric current gives rise to a circular magnetic field (the \O{}rsted magnetic field), which can be calculated using the Maxwell-Ampere equation. For wires with diameters on the macroscopic scale, the Maxwell-Ampere equation is an established physical law that can reproduce a range of experimental observations. A key implication of this equation is that the induction of the \O{}rsted magnetic field is only a result of the displacement of charge. A possible microscopic origin of \O{}rsted magnetic induction was suggested in [J. Mag. Mag. Mat. 504, 166660 (2020)] (hereafter called the current magnetization hypothesis, CMH). The present work establishes computationally, using simplified wire models, that the CMH reproduces the results of the Maxwell-Ampere equation for wires with a square cross section. I demonstrate that the CMH could resolve the apparent contradiction between the observed induced magnetic field and that predicted by the Maxwell-Ampere equation in nanowires, as was reported in [Phys. Rev. B 99, 014436 (2019)]. The CMH shows that a possible reason for such a contradiction is the presence of non-conductive surface layers in conductors.
The current magnetization hypothesis as a microscopic theory of the {\O}rsted magnetic field induction
The VLT-FLAMES Tarantula Survey has observed hundreds of O-type stars in the 30 Doradus region of the Large Magellanic Cloud (LMC). We study the properties of 105 apparently single O-type dwarfs. To determine stellar and wind parameters, we used the IACOB-GBAT package, an automatic procedure based on a large grid of atmospheric models calculated with the FASTWIND code. In addition to classical techniques, we applied the Bayesian BONNSAI tool to estimate evolutionary masses. We provide a new calibration of effective temperature vs. spectral type for O-type dwarfs in the LMC, based on our homogeneous analysis of the largest sample of such objects to date and including all spectral subtypes. Good agreement with previous results is found, although the sampling at the earliest subtypes could be improved. Rotation rates and helium abundances are studied in an evolutionary context. We find that most of the rapid rotators (vsini higher than 300 km/s) in our sample have masses below 25 MSun and intermediate rotation-corrected gravities (log gc between 3.9 and 4.1). Such rapid rotators are scarce at higher gravities (i.e. younger ages) and absent at lower gravities (older ages). This is not expected from theoretical evolutionary models, and does not appear to be due to a selection bias in our sample. We compare the estimated evolutionary and spectroscopic masses, finding a trend that the former is higher for masses below 20 MSun. This can be explained as a consequence of limiting our sample to the O-type stars, and we see no compelling evidence for a systematic mass discrepancy. For most of the stars in the sample we were unable to estimate the wind-strength parameter (hence mass-loss rates) reliably, particularly for objects with luminosity lower than logL/LSun about 5.1. Ultraviolet spectroscopy is needed to undertake a detailed investigation of the wind properties of these dwarfs.
The VLT-FLAMES Tarantula Survey XXVI: Properties of the O-dwarf population in 30 Doradus
Kuiper belt objects (KBOs) are thought to be remnants of the early solar system, and their size distribution provides an opportunity to explore the formation and evolution of the outer solar system. In particular, the size distribution of kilometre-sized (radius = 1-10 km) KBOs represents a signature of the initial planetesimal sizes when planets form. These kilometre-sized KBOs are extremely faint, and it is impossible to detect them directly. Instead, monitoring of stellar occultation events is one possible way to discover these small KBOs. Hitherto, however, there has been no observational evidence for occultation events caused by KBOs with radii of 1-10 km. Here we report the first detection of a candidate single occultation event by a KBO with a radius of $\sim$1.3 km, recorded simultaneously by two low-cost small telescopes coupled with commercial CMOS cameras. From this detection, we conclude that the surface number density of KBOs with radii exceeding $\sim 1.2$ km is $\sim 6 \times 10^5 \ {\rm deg^{-2}}$. This surface number density favours a theoretical size distribution model with an excess signature at a radius of 1-2 km. If this is a true detection, it implies that planetesimals grew into kilometre-sized objects before their runaway growth phase in the primordial outer solar system and remain a major population of the present-day Kuiper belt.
Amateur telescopes discover a kilometre-sized Kuiper belt object from stellar occultation
Consider the following system of doubly coupled Schr\"odinger equations arising from Bose-Einstein condensates etc., \begin{equation*} \left\{\begin{array}{l} -\Delta u + u = \mu_1 u^3 + \beta u v^2 - \kappa v, \\ -\Delta v + v = \mu_2 v^3 + \beta u^2 v - \kappa u, \\ u\neq 0,\ v\neq 0\ \hbox{and}\ u, v\in H^1(\mathbb{R}^N), \end{array} \right. \end{equation*} where $\mu_1, \mu_2$ are positive and fixed, and $\kappa$ and $\beta$ are linear and nonlinear coupling parameters, respectively. We first use critical point theory and a Liouville type theorem to prove some existence and nonexistence results for the positive solutions of this system. Then, using the positive and non-degenerate solution of the scalar equation $-\Delta\omega+\omega=\omega^3$, $\omega\in H_r^1(\mathbb{R}^N)$, we construct a synchronized solution branch to prove that for $\beta$ fixed in a certain range, there exists a series of bifurcations in the product space $\mathbb{R}\times H^1_r(\mathbb{R}^N)\times H^1_r(\mathbb{R}^N)$ with parameter $\kappa$.
Existence and bifurcation of solutions for a double coupled system of Schrodinger equations
$p$-adic Hodge Theory is one of the most powerful tools in modern Arithmetic Geometry. In this survey, we will review $p$-adic Hodge Theory for algebraic varieties, present current developments in $p$-adic Hodge Theory for analytic varieties, and discuss some of its applications to problems in Number Theory. This is an extended version of a talk at the Jubilee Congress for the 100th anniversary of the Polish Mathematical Society, Krak\'ow, 2019.
Hodge Theory of $p$-adic varieties: a survey
In high temperature density functional theory simulations (from tens of eV to keV), the total number of Kohn-Sham orbitals is a critical quantity for obtaining accurate results. To establish the relationship between the number of orbitals and the occupation level of the highest orbital, we derived a model based on the properties of the electron gas at finite temperature. This model predicts the total number of orbitals required to reach a given occupation level and thus a stipulated precision. Occupation levels as low as $10^{-4}$, and below, must be considered to obtain results converged to better than 1%, making high temperature simulations very time consuming beyond a few tens of eV. After assessing the predictions of the model against previous results and ABINIT minimizations, we show how the extended FPMD method of Zhang et al. [PoP 23 042707, 2016] allows one to bypass these strong constraints on the number of orbitals at high temperature.
Requirements for very high temperature Kohn-Sham density functional simulations and how to bypass them
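A back-of-the-envelope version of the orbital-count argument above can be written down for an ideal electron gas: find the energy at which the Fermi-Dirac occupation drops to a target level, then count free-electron orbitals up to that energy via N(E) proportional to E^(3/2). The sketch below is only this crude estimate, with the chemical potential held fixed; it is not the model derived in the paper:

```python
import numpy as np

def orbital_ratio(mu_eV, kT_eV, f_min):
    """Free-electron-gas estimate of the orbital count needed at temperature kT.

    Returns the ratio of the number of orbitals up to the cutoff energy E_c
    (where the Fermi-Dirac occupation drops to f_min) to the number of
    orbitals below the chemical potential, using N(E) ~ E**1.5.
    """
    e_cut = mu_eV + kT_eV * np.log(1.0 / f_min - 1.0)   # solves f(E_c) = f_min
    return (max(e_cut, 0.0) / mu_eV) ** 1.5

for kT in (10.0, 50.0, 100.0):          # temperatures in eV, toy chemical potential
    print(kT, round(orbital_ratio(mu_eV=30.0, kT_eV=kT, f_min=1e-4), 1))
```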
The maximally entangled mixed states of Munro, James, White, and Kwiat [Phys. Rev. A {\bf 64} (2001) 030302] are shown to exhibit interesting features vis a vis conditional entropic measures. The same happens with the Ishizaka and Hiroshima states [Phys. Rev. A {\bf 62} 022310 (2000)], whose degree of entanglement cannot be increased by acting on them with logic gates. Special types of entangled states that do not violate classical entropic inequalities are seen to exist in the space of two qubits. A special meaning can be assigned to the Munro {\it et al.} participation ratio of 1.8.
Maximally Entangled Mixed States and Conditional Entropies
We simplify and extend the construction of half-BPS solutions to 11-dimensional supergravity, with isometry superalgebra D(2,1;\gamma) \oplus D(2,1;\gamma). Their space-time has the form AdS_3 x S^3 x S^3 warped over a Riemann surface \Sigma. It describes near-horizon geometries of M2 branes ending on, or intersecting with, M5 branes along a common string. The general solution to the BPS equations is specified by a reduced set of data (\gamma, h, G), where \gamma is the real parameter of the isometry superalgebra, and h and G are functions on \Sigma whose differential equations and regularity conditions depend only on the sign of \gamma. The magnitude of \gamma enters only through the map of h, G onto the supergravity fields, thereby promoting all solutions into families parametrized by |\gamma|. By analyzing the regularity conditions for the supergravity fields, we prove two general theorems: (i) that the only solution with a 2-dimensional CFT dual is AdS_3 x S^3 x S^3 x R^2, modulo discrete identifications of the flat R^2, and (ii) that solutions with \gamma < 0 cannot have more than one asymptotic higher-dimensional AdS region. We classify the allowed singularities of h and G near the boundary of \Sigma, and identify four local solutions: asymptotic AdS_4/Z_2 or AdS_7' regions; highly-curved M5-branes; and a coordinate singularity called the "cap". By putting these "Lego" pieces together we recover all known global regular solutions with the above symmetry, including the self-dual strings on M5 for $\gamma < 0$, and the Janus solution for \gamma > 0, but now promoted to families parametrized by |\gamma|. We also explicitly construct new regular solutions which are asymptotic to AdS_4/Z_2 for \gamma < 0, and conjecture that they are a different superconformal limit of the self-dual string.
M-theory Solutions Invariant under $D(2,1;\gamma) \oplus D(2,1;\gamma)$
This paper describes the ANFIS Unit Neural Network (AU-NN), a deep neural network in which each neuron is an independent ANFIS. Two use cases are shown to test the capability of the network: (i) classification of five imagined words; (ii) incremental learning in the task of detecting Imagined Word Segments vs. Idle State Segments. In both cases, the proposed network outperforms the conventional methods. Additionally, we describe a classification process in which, instead of taking the whole instance as one example, each instance is decomposed into a set of smaller instances, and the classification is done by a majority vote over all the predictions of the set. The code to build the AU-NN used in this paper is available in the github repository https://github.com/tonahdztoro/AU_NN.
AU-NN: ANFIS Unit Neural Network
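The decomposed-instance classification described above reduces to a majority vote over per-segment predictions. Below is a minimal sketch of that voting step; the function names and the toy predictor are illustrative and not taken from the linked repository:

```python
import numpy as np

def classify_by_vote(segments, predict):
    """Classify one instance from votes over its smaller sub-instances.

    segments: array of shape (n_segments, n_features); predict maps a batch of
    segments to integer class predictions.
    """
    votes = predict(segments)
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

# Toy predictor: threshold on the mean feature value.
predict = lambda x: (x.mean(axis=1) > 0.5).astype(int)
instance = np.random.rand(11, 64)          # 11 sub-instances, 64 features each
print(classify_by_vote(instance, predict))
```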
In this research, we present an end-to-end data-driven pipeline for determining the long-term stability status of objects within a given environment, specifically distinguishing between static and dynamic objects. Understanding object stability is key for mobile robots, since long-term stable objects can be exploited as landmarks for long-term localisation. Our pipeline includes a labelling method that utilizes historical data from the environment to generate training data for a neural network. Rather than utilizing discrete labels, we propose the use of point-wise continuous label values, indicating the spatio-temporal stability of individual points, to train a point cloud regression network named LTS-NET. Our approach is evaluated on point cloud data from two parking lots in the NCLT dataset, and the results show that our proposed solution outperforms direct training of a classification model for static vs. dynamic object classification.
LTS-NET: End-to-end Unsupervised Learning of Long-Term 3D Stable objects
In this note we study the growth of $\sum_{m=1}^M\frac1{\|m\alpha\|}$ as a function of $M$ for different classes of $\alpha\in[0,1)$. Hardy and Littlewood showed that for numbers of bounded type, the sum is $\simeq M\log M$. We give a very simple proof for it. Further we show the following for generic $\alpha$: for a non-decreasing function $\phi$ tending to infinity, $\limsup_{M\to\infty}\frac1{\phi(\log M)}\bigg[\frac1{M\log M}\sum_{m=1}^M\frac1{\|m\alpha\|}\bigg]$ is zero or infinity according as $\sum\frac1{k\phi(k)}$ converges or diverges.
A Generalization of a Result of Hardy and Littlewood
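The growth claim for numbers of bounded type is easy to probe numerically: for a quadratic irrational such as the golden-ratio conjugate, the ratio of the sum to M log M should stay bounded. A small check (double precision suffices here because ||m alpha|| stays far above the rounding error of m*alpha for these values of M):

```python
import math

def nearest_int_distance(x):
    # ||x||: distance from x to the nearest integer.
    return abs(x - round(x))

def s(alpha, M):
    return sum(1.0 / nearest_int_distance(m * alpha) for m in range(1, M + 1))

alpha = (math.sqrt(5) - 1) / 2        # golden-ratio conjugate: bounded type
for M in (10**3, 10**4, 10**5):
    print(M, s(alpha, M) / (M * math.log(M)))   # ratio should stay bounded
```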
Determining the physical Hilbert space is often considered the most difficult but crucial part of completing the quantization of a constrained system. In such a situation it can be more economical to use effective constraint methods, which are extended here to relativistic systems as they arise for instance in quantum cosmology. By side-stepping explicit constructions of states, such tools allow one to arrive much more feasibly at results for physical observables at least in semiclassical regimes. Several questions discussed recently regarding effective equations and state properties in quantum cosmology, including the spreading of states and quantum back-reaction, are addressed by the examples studied here.
Effective Constraints for Relativistic Quantum Systems
Information-centric networking proposals attract much attention in the ongoing search for a future communication paradigm of the Internet. Replacing the host-to-host connectivity by a data-oriented publish/subscribe service eases content distribution and authentication by concept, while eliminating threats from unwanted traffic at an end host as are common in today's Internet. However, current approaches to content routing heavily rely on data-driven protocol events and thereby introduce a strong coupling of the control to the data plane in the underlying routing infrastructure. In this paper, threats to the stability and security of the content distribution system are analyzed in theory and practical experiments. We derive relations between state resources and the performance of routers and demonstrate how this coupling can be misused in practice. We discuss new attack vectors present in its current state of development, as well as possibilities and limitations to mitigate them.
Backscatter from the Data Plane --- Threats to Stability and Security in Information-Centric Networking
Generic object detection has been immensely promoted by the development of deep convolutional neural networks in the past decade. However, under domain shift, changes in weather, illumination, etc. often cause a domain gap, and thus performance drops substantially when detecting objects from one domain in another. Existing methods for this task usually focus on high-level alignment based on the whole image or the object of interest, which, naturally, cannot fully utilize the fine-grained channel information. In this paper, we realize adaptation from a thoroughly different perspective, i.e., channel-wise alignment. Motivated by the finding that each channel focuses on a specific pattern (e.g., on special semantic regions, such as car), we aim to align the distributions of the source and target domains at the channel level, which offers a finer granularity for integrating discrepant domains. Our method mainly consists of self channel-wise and cross channel-wise alignment. These two parts explore the inner-relation and cross-relation of attention regions implicitly from the view of channels. Furthermore, we also propose an RPN domain classifier module to obtain a domain-invariant RPN network. Extensive experiments show that the proposed method performs notably better than existing methods, with about 5% improvement under various domain-shift settings. Experiments on a different task (e.g. instance segmentation) also demonstrate its good scalability.
Channel-wise Alignment for Adaptive Object Detection
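A simple stand-in for channel-level alignment is to match per-channel feature statistics between source and target batches. The moment-matching loss below only illustrates the channel-wise viewpoint; it is not the paper's self/cross channel-wise alignment module:

```python
import torch

def channel_alignment_loss(feat_src, feat_tgt):
    """Match per-channel first- and second-order statistics of two feature maps.

    feat_*: tensors of shape (N, C, H, W) produced by the same backbone.
    """
    mu_s = feat_src.mean(dim=(0, 2, 3))
    mu_t = feat_tgt.mean(dim=(0, 2, 3))
    var_s = feat_src.var(dim=(0, 2, 3))
    var_t = feat_tgt.var(dim=(0, 2, 3))
    return ((mu_s - mu_t) ** 2).mean() + ((var_s - var_t) ** 2).mean()

src = torch.rand(4, 64, 32, 32)
tgt = torch.rand(4, 64, 32, 32) * 1.5
print(channel_alignment_loss(src, tgt).item())
```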
This article is a short introduction to the theory of the groups of points of elliptic curves over finite fields. It is concerned with the elementary theory and practice of elliptic curves cryptography, the new generation of public key systems. The material and coverage are focused on the groups of points of elliptic curves and algebraic curves, but not exclusively.
Topic In Elliptic Curves Over Finite Fields: The Groups of Points
The photoproduction of large-$p_T$ charged hadrons and of prompt photons is discussed, for the inclusive case and with an associated jet, using predictions from the NLO partonic Monte Carlo program EPHOX. Comparisons to recent HERA data are also shown.
Photoproduction of isolated photons, single hadrons and jets at NLO
We discuss design considerations and simulation results for IceRay, a proposed large-scale ultra-high energy (UHE) neutrino detector at the South Pole. The array is designed to detect the coherent Askaryan radio emission from UHE neutrino interactions in the ice, with the goal of detecting the cosmogenic neutrino flux with reasonable event rates. Operating in coincidence with the IceCube neutrino detector would allow complete calorimetry of a subset of the events. We also report on the status of a testbed IceRay station which incorporates both ANITA and IceCube technology and will provide year-round monitoring of the radio environment at the South Pole.
IceRay: An IceCube-centered Radio-Cherenkov GZK Neutrino Detector
$\alpha$-cluster correlations in the ground states of $^{12}$C and $^{16}$O are studied. Because of the $\alpha$ correlations, the intrinsic states of $^{12}$C and $^{16}$O have triangular and tetrahedral shapes, respectively. The deformations are regarded as spontaneous symmetry breaking of rotational invariance, and the resultant oscillating surface density is associated with a density wave (DW) state caused by the instability of the Fermi surface with respect to a kind of $1p$-$1h$ correlation. To discuss the symmetry breaking between uniform density states and the oscillating density state, a schematic model of a few clusters on a Fermi gas core in a one-dimensional finite box is introduced. The model analysis suggests structure transitions from a Fermi gas state to a DW-like state via a BCS-like state, and to a Bose-Einstein condensation (BEC)-like state, depending on the cluster size relative to the box size. It is found that the oscillating density in the DW-like state originates in Pauli blocking effects.
alpha-cluster correlations and symmetry breaking in light nuclei
We introduce further generalizations of BCI, BCK, and Hilbert algebras, with proper examples, and show the hierarchies existing between all these algebras, both old and new. Namely, we found thirty-one new generalizations of BCI and BCK algebras and twenty generalizations of Hilbert algebras.
New generalizations of BCI, BCK and Hilbert algebras
Using a Sagdeev pseudopotential approach, in which the nonlinear structures are stationary in a comoving frame, arbitrary (large) amplitude dust-acoustic solitary waves and double layers have been studied in dusty plasmas containing warm positively charged dust and nonthermally distributed electrons and ions. Depending on the value of the critical Mach number, which varies with the plasma parameters, both supersonic and subsonic dust-acoustic solitary waves are found. It is found that our plasma system under consideration supports both positive and negative supersonic solitary waves, but only positive subsonic solitary waves and negative double layers. The parametric regimes for the existence of subsonic and supersonic dust-acoustic waves, and how the polarity of the solitary waves changes with the plasma parameters, are shown. It is observed that the solitary wave and double layer solutions exist at values of the Mach number around its critical value. The basic properties (amplitude, width, speed, etc.) of the solitary pulses and double layers are significantly modified by the plasma parameters (viz. the ion to positive dust number density ratio, the ion to electron temperature ratio, the nonthermal parameter, the positive dust temperature to ion temperature ratio, etc.). The applications of our present work to space environments (viz. cometary tails, Earth's mesosphere, Jupiter's magnetosphere, etc.) and laboratory devices, where nonthermal ion and electron species along with positively charged dust species have been observed, are briefly discussed.
Large amplitude dust-acoustic solitary waves and double layers in nonthermal warm complex plasmas
Freezing sets and cold sets have been introduced as part of the theory of fixed points in digital topology. In this paper, we introduce a generalization of these notions, the limiting set, and examine properties of limiting sets.
Limiting Sets in Digital Topology
Non-standard Bose-Hubbard models can exhibit rich ground state phase diagrams, even when considering the one-dimensional limit. Using a self-consistent Gutzwiller diagonalisation approach, we study the mean-field ground state properties of a long-range interacting atomic gas in a one-dimensional optical lattice. We first confirm that the inclusion of long-range two-body interactions to the standard Bose-Hubbard model introduces density wave and supersolid phases. However, the introduction of pair and density-dependent tunnelling can result in new phases with two-site periodic density, single-particle transport and two-body transport order parameters. These staggered phases are potentially a mean-field signature of the known novel twisted superfluids found via a DMRG approach [PRA \textbf{94}, 011603(R) (2016)]. We also observe other unconventional phases, which are characterised by sign staggered order parameters between adjacent lattice sites.
Staggered Ground States in an Optical Lattice
The time-parallel solution of optimality systems arising in PDE-constrained optimization can be achieved by simply applying any time-parallel algorithm, such as Parareal, to solve the forward and backward evolution problems arising in the optimization loop. We propose here a different strategy by devising directly a new time-parallel algorithm, which we call ParaOpt, for the coupled forward and backward non-linear partial differential equations. ParaOpt is inspired by the Parareal algorithm for evolution equations, and thus is automatically a two-level method. We provide a detailed convergence analysis for the case of linear parabolic PDE constraints. We illustrate the performance of ParaOpt with numerical experiments both for linear and nonlinear optimality systems.
PARAOPT: A parareal algorithm for optimality systems
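Since ParaOpt is modelled on Parareal, the underlying two-level iteration is worth recalling: a cheap coarse propagator drives a serial correction sweep, while the expensive fine propagations over the time slices can run in parallel. The sketch below is a serial emulation of plain Parareal on a scalar test ODE, not of ParaOpt's coupled forward-backward version:

```python
import numpy as np

def parareal(y0, t0, t1, n_slices, fine, coarse, n_iter=5):
    """Serial emulation of the Parareal iteration.

    fine(t_a, t_b, y) and coarse(t_a, t_b, y) propagate y from t_a to t_b;
    in a real run the fine solves over all slices would execute in parallel.
    """
    T = np.linspace(t0, t1, n_slices + 1)
    U = [y0]
    for n in range(n_slices):                       # initial coarse sweep
        U.append(coarse(T[n], T[n + 1], U[n]))
    for _ in range(n_iter):
        F = [fine(T[n], T[n + 1], U[n]) for n in range(n_slices)]
        G_old = [coarse(T[n], T[n + 1], U[n]) for n in range(n_slices)]
        U_new = [y0]
        for n in range(n_slices):                   # sequential correction sweep
            U_new.append(coarse(T[n], T[n + 1], U_new[n]) + F[n] - G_old[n])
        U = U_new
    return T, U

lam = -1.0                                          # test problem y' = lam * y
fine = lambda a, b, y: y * np.exp(lam * (b - a))    # exact propagator
coarse = lambda a, b, y: y * (1.0 + lam * (b - a))  # one explicit Euler step
T, U = parareal(1.0, 0.0, 2.0, n_slices=10, fine=fine, coarse=coarse)
print(U[-1], np.exp(lam * 2.0))                     # Parareal iterate vs. exact value
```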
According to the classification scheme of the generalized random matrix ensembles, we present various concrete examples of the generalized ensemble and derive their joint density functions in a unified way by one simple formula, which was proved in [2]. Particular cases of these examples include the Gaussian ensemble, the chiral ensemble, new transfer matrix ensembles, the circular ensemble, Jacobi ensembles, and so on. The associated integration formulae are also given; they recover many classical integration formulae or variants thereof.
A Generalization of Random Matrix Ensemble II: Concrete Examples and Integration Formulae
We study the asymptotic behaviour of the perturbative series in the heavy quark effective theory (HQET) using the $1/N_f$ expansion. We find that this theory suffers from an {\it ultraviolet} renormalon problem, corresponding to a non-Borel-summable behaviour of perturbation series in large orders, and leading to a principal nonperturbative ambiguity in its definition. This ambiguity is related to an {\it infrared} renormalon in the pole mass and can be understood as the necessity to include the residual mass term $\delta m$ in the definition of HQET, which must be considered as ambiguous (and possibly complex), and is required to cancel the ultraviolet renormalon singularity generated by the perturbative expansion. The formal status of $\delta m$ is thus identical to that of condensates in the conventional short-distance expansion of correlation functions in QCD. The status of the pole mass of a heavy quark, the operator product expansion for inclusive decays, and QCD sum rules in the HQET are discussed in this context.
Heavy Quark Effective Theory beyond Perturbation Theory: Renormalons, the Pole Mass and the Residual Mass Term
We consider coherent electromagnetic processes for colliders with short bunches, in particular the coherent bremsstrahlung (CBS). CBS is the radiation of one bunch particles in the collective field of the oncoming bunch. It can be a potential tool for optimizing collisions and for measuring beam parameters. A new simple and transparent method to calculate CBS is presented based on the equivalent photon approximation for this collective field. The results are applied to the $\phi$--factory $DA\Phi NE$. For this collider about $ 5 \cdot 10^{14} d E_\gamma / E_\gamma$ photons per second are expected in the photon energy $E_\gamma$ range from the visible light up to 25 eV.
The equivalent photon approximation for coherent processes at colliders
We study the precision with which the t-channel single top quark production cross section is expected to be measured in future LHC runs at 14 TeV. The single top final state has a lepton and neutrino from the top quark decay plus two jets, one of which is required to be b-tagged. This measurement is done in the context of the Snowmass 2013 study for the low-luminosity 14 TeV and the high-luminosity 14 TeV LHC as well as for high-luminosity 33 TeV LHC.
Single top quark cross section measurement in the t-channel at the high-luminosity LHC
Power systems are undergoing unprecedented transformations with the incorporation of larger amounts of renewable energy sources, distributed generation and demand response. All these changes, while potentially making power grids more responsive, efficient and resilient, also pose significant implementation challenges. In particular, operating the new power grid will require new tools and algorithms capable of predicting whether the current state of the system is operationally safe. In this paper we study and generalize the so-called energy function as a tool to design algorithms that test whether a high-voltage power transmission system is within the allowed operational limits. In the past the energy function technique was utilized primarily to assess power system transient stability. In this manuscript, we take a new look at energy functions and focus on an aspect that has previously received little attention: convexity. We characterize the domain of voltage magnitudes and phases within which the energy function is convex. We show that the domain of energy function convexity is sufficiently large to include most operationally relevant and practically interesting cases. We show how the energy function convexity can be used to analyze power flow equations, e.g. to certify solution uniqueness or non-existence within the domain of convexity. This and other useful features of the generalized energy function are described and illustrated on the IEEE 14 and 118 bus models.
Convexity of Energy-Like Functions: Theoretical Results and Applications to Power System Operations
A class of languages C is perfect if it is closed under Boolean operations and the emptiness problem is decidable. Perfect language classes are the basis for the automata-theoretic approach to model checking: a system is correct if the language generated by the system is disjoint from the language of bad traces. Regular languages are perfect, but because the disjointness problem for CFLs is undecidable, no class containing the CFLs can be perfect. In practice, verification problems for language classes that are not perfect are often under-approximated by checking if the property holds for all behaviors of the system belonging to a fixed subset. A general way to specify a subset of behaviors is by using bounded languages (languages of the form w1* ... wk* for fixed words w1,...,wk). A class of languages C is perfect modulo bounded languages if it is closed under Boolean operations relative to every bounded language, and if the emptiness problem is decidable relative to every bounded language. We consider finding perfect classes of languages modulo bounded languages. We show that the class of languages accepted by multi-head pushdown automata are perfect modulo bounded languages, and characterize the complexities of decision problems. We also show that bounded languages form a maximal class for which perfection is obtained. We show that computations of several known models of systems, such as recursive multi-threaded programs, recursive counter machines, and communicating finite-state machines can be encoded as multi-head pushdown automata, giving uniform and optimal underapproximation algorithms modulo bounded languages.
A Perfect Model for Bounded Verification
This paper discusses a target tracking problem in which no dynamic mathematical model is explicitly assumed. A nonlinear filter based on fuzzy If-then rules is developed. A comparison with a Kalman filter is made, and empirical results show that the performance of the fuzzy filter is better. Intensive simulations suggest that a theoretical justification of the empirical results is possible.
A Fuzzy Logic Approach to Target Tracking
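The abstract does not spell out the rule base of the fuzzy filter, so the sketch below only illustrates the general idea of a fuzzy If-then tracking update: a Sugeno-style rule set schedules the correction gain from the innovation magnitude. The memberships, gains and motion model are assumptions for illustration, not the paper's design.

```python
# A minimal Sugeno-style fuzzy gain-scheduled tracker (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def mu_small(e, c=2.0):            # membership of "innovation is small"
    return max(0.0, 1.0 - abs(e) / c)

def mu_large(e, c=2.0):            # membership of "innovation is large"
    return min(1.0, abs(e) / c)

def fuzzy_gain(e, k_small=0.2, k_large=0.8):
    # Rule 1: IF |innovation| is small THEN gain = k_small
    # Rule 2: IF |innovation| is large THEN gain = k_large
    w1, w2 = mu_small(e), mu_large(e)
    return (w1 * k_small + w2 * k_large) / (w1 + w2)

true_pos = np.cumsum(0.5 + 0.1 * rng.standard_normal(200))   # drifting target
meas = true_pos + rng.standard_normal(200)                   # noisy measurements

x, est = meas[0], []
for z in meas:
    e = z - x                       # innovation
    x = x + fuzzy_gain(e) * e       # fuzzy If-then update of the estimate
    est.append(x)

print("RMSE:", np.sqrt(np.mean((np.array(est) - true_pos) ** 2)))
```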
Observed temperatures of transiently accreting neutron stars in the quiescent state are generally believed to be supported by deep crustal heating, associated with non-equilibrium exothermic reactions in the crust. Traditionally, these reactions are studied by considering nuclear evolution governed by compression of the accreted matter. Here we show that this approach has a basic weakness: in some regions of the inner crust the conservative forces acting on the matter components (nuclei and neutrons) are not in mechanical equilibrium. In principle the force balance can be restored by dissipative forces, but the required diffusion fluxes are of the same order as the total baryon flux at Eddington accretion. We argue that the redistribution of neutrons in the inner crust should be included in a realistic model of the accreted crust.
Crucial role of neutron diffusion in the crust of accreting neutron stars
We study supersymmetric $AdS_4$ black holes in matter-coupled $N=3$ and $N=4$ gauged supergravities in four dimensions. In $N=3$ theory, we consider $N=3$ gauged supergravity coupled to three vector multiplets and $SO(3)\times SO(3)$ gauge group. The resulting gauged supergravity admits two $N=3$ supersymmetric $AdS_4$ vacua with $SO(3)\times SO(3)$ and $SO(3)$ symmetries. We find an $AdS_2\times H^2$ solution with $SO(2)\times SO(2)$ symmetry and an analytic solution interpolating between this geometry and the $SO(3)\times SO(3)$ symmetric $AdS_4$ vacuum. For $N=4$ gauged supergravity coupled to six vector multiplets with $SO(4)\times SO(4)$ gauge group, there exist four supersymmetric $AdS_4$ vacua with $SO(4)\times SO(4)$, $SO(4)\times SO(3)$, $SO(3)\times SO(4)$ and $SO(3)\times SO(3)$ symmetries. We find a number of $AdS_2\times S^2$ and $AdS_2\times H^2$ geometries together with the solutions interpolating between these geometries and all, but the $SO(3)\times SO(3)$, $AdS_4$ vacua. These solutions provide a new class of $AdS_4$ black holes with spherical and hyperbolic horizons dual to holographic RG flows across dimensions from $N=3,4$ SCFTs in three dimensions to superconformal quantum mechanics within the framework of four-dimensional gauged supergravity.
Supersymmetric AdS4 black holes from matter-coupled N=3,4 gauged supergravities
The couplings of the isosinglet axial-vector currents to the Eta and Eta' mesons are evaluated in a stable, model independent way by use of polynomial kernels in dispersion integrals. The corrections to the Gell-Mann-Oakes-Renner relation in the isoscalar channel are deduced. The derivative of the topological susceptibility at the origin is calculated taking into account instantons and instanton screening.
Eta-Eta' mixing and the derivative of the topological susceptibility at zero momentum transfer
We present the final results of the JLQCD calculation of the light hadron spectrum and quark masses with two flavors of dynamical quarks using the plaquette gauge action and fully $O(a)$-improved Wilson quark action at $\beta=5.2$. We observe that sea quark effects lead to a closer agreement of the strange meson and baryon masses with experiment and a reduction of quark masses by about 25%.
Light hadron spectrum with two flavors of $O(a)$ improved dynamical quarks : final results from JLQCD
We study positive kernels on $X\times X$, where $X$ is a set equipped with an action of a group, and taking values in the set of $\mathcal A$-sesquilinear forms on a (not necessarily Hilbert) module over a $C^*$-algebra $\mathcal A$. These maps are assumed to be covariant with respect to the group action on $X$ and a representation of the group in the set of invertible ($\mathcal A$-linear) module maps. We find necessary and sufficient conditions for extremality of such kernels in certain convex subsets of positive covariant kernels. Our focus is mainly on a particular example of these kernels: a completely positive (CP) covariant map for which we obtain a covariant minimal dilation (or KSGNS construction). We determine the extreme points of the set of normalized covariant CP maps and, as a special case, study covariant quantum observables and instruments whose value space is a transitive space of a unimodular type-I group. As an example, we discuss the case of instruments that are covariant with respect to a square-integrable representation.
Covariant KSGNS construction and quantum instruments
Correlated imaging through atmospheric turbulence is studied, and analytical expressions describing turbulence effects on image resolution are derived. Compared with direct imaging, correlated imaging can reduce the influence of turbulence to a certain extent and reconstruct high-resolution images. The result is backed up by numerical simulations, in which turbulence-induced phase perturbations are simulated by random phase screens inserted along the propagation paths.
Correlated imaging through atmospheric turbulence
The dual-pixel (DP) hardware works by splitting each pixel in half and creating an image pair in a single snapshot. Several works estimate depth/inverse depth by treating the DP pair as a stereo pair. However, dual-pixel disparity only occurs in image regions with the defocus blur. The heavy defocus blur in DP pairs affects the performance of matching-based depth estimation approaches. Instead of removing the blur effect blindly, we study the formation of the DP pair which links the blur and the depth information. In this paper, we propose a mathematical DP model which can benefit depth estimation by the blur. These explorations motivate us to propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image. Moreover, we define a reblur loss, which reflects the relationship of the DP image formation process with depth information, to regularise our depth estimate in training. To meet the requirement of a large amount of data for learning, we propose the first DP image simulator which allows us to create datasets with DP pairs from any existing RGBD dataset. As a side contribution, we collect a real dataset for further research. Extensive experimental evaluation on both synthetic and real datasets shows that our approach achieves competitive performance compared to state-of-the-art approaches.
Dual Pixel Exploration: Simultaneous Depth Estimation and Image Restoration
This paper provides a simple procedure to fit generative networks to target distributions, with the goal of a small Wasserstein distance (or other optimal transport costs). The approach is based on two principles: (a) if the source randomness of the network is a continuous distribution (the "semi-discrete" setting), then the Wasserstein distance is realized by a deterministic optimal transport mapping; (b) given an optimal transport mapping between a generator network and a target distribution, the Wasserstein distance may be decreased via a regression between the generated data and the mapped target points. The procedure here therefore alternates these two steps, forming an optimal transport and regressing against it, gradually adjusting the generator network towards the target distribution. Mathematically, this approach is shown to minimize the Wasserstein distance to both the empirical target distribution, and also its underlying population counterpart. Empirically, good performance is demonstrated on the training and testing sets of the MNIST and Thin-8 data. The paper closes with a discussion of the unsuitability of the Wasserstein distance for certain tasks, as has been identified in prior work [Arora et al., 2017, Huang et al., 2017].
A gradual, semi-discrete approach to generative network training via explicit Wasserstein minimization
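A toy sketch of the alternating procedure described above, with two simplifications that should be read as assumptions: a discrete assignment (Hungarian matching between equal-size batches) stands in for the semi-discrete optimal transport map, and the generator is linear, so the regression step reduces to least squares.

```python
# Alternate (a) optimal transport matching and (b) regression onto the matched targets.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
target = rng.normal(loc=[3.0, -2.0], scale=0.5, size=(256, 2))   # toy target data
Z = rng.standard_normal((256, 4))                                 # source randomness
Zb = np.hstack([Z, np.ones((256, 1))])                            # bias column

W = rng.standard_normal((5, 2)) * 0.1                             # linear "generator"

for it in range(20):
    X = Zb @ W                                        # generated samples
    cost = cdist(X, target, metric="sqeuclidean")     # pairwise transport costs
    rows, cols = linear_sum_assignment(cost)          # step (a): optimal matching
    matched = target[cols[np.argsort(rows)]]          # target point assigned to each X_i
    W, *_ = np.linalg.lstsq(Zb, matched, rcond=None)  # step (b): regression
    w2 = np.sqrt(cost[rows, cols].mean())             # rough Wasserstein-2 estimate at matching time
    if it % 5 == 0:
        print(f"iter {it:2d}  approx W2 = {w2:.3f}")
```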
A great number of Internet of Things (IoT) and machine-to-machine (M2M) based applications require energy efficient, long range and low data rate wireless communication links. In order to offer a competitive solution in these areas, the IEEE 802.11 standardization group has defined the "ah" amendment, the first sub-1GHz WLAN standard, with flexible channel bandwidths from 1MHz up to 16MHz and many other physical and link layer improvements, enabling long-range and energy efficient communications. However, for some regions, such as Europe, the maximum transmitted power in the dedicated frequency band is limited to only 10mW, preventing ranges close to the targeted 1km. In this paper we examine possibilities for range extension through the implementation of a half-duplex decode-and-forward (DF) relay station (RS) in the communication between an access point (AP) and an end-station (ST). Assuming a Rician fading channel between AP and RS, and a Rayleigh fading channel on the RS-ST link, we analytically derive the achievable ranges for the most robust modulation and coding schemes (MCSs). Analyses are performed for two different standard-adopted deployment scenarios on the RS-ST link. Moreover, we analyze whether the considered most robust MCSs, known for supporting the longest range but the lowest data rates, can meet the defined requirement of at least 100kb/s at the greatest attainable AP-RS-ST distances. We examine data rate enhancements brought by coding and the use of short packets, for both downlink (DL) and uplink (UL). Finally, we present bit error rate (BER) results obtained through simulations of a dual-hop DF IEEE 802.11ah relay system for the considered MCSs. All presented results confirm that IEEE 802.11ah systems, through the deployment of relay stations, become an interesting solution for M2M and IoT based applications.
Range Extension in IEEE 802.11ah Systems Through Relaying
RW Triangulum (RW Tri) is a 13th magnitude Nova-like Cataclysmic Variable star with an orbital period of 0.2319 days (5.56 hours). Infrared observations of RW Tri indicate that its secondary is most likely a late K-dwarf. Past analyses predicted a distance of 270 parsec, derived from a black-body fit to the spectrum of the central part of the disk. Recently completed Hubble Space Telescope Fine Guidance Sensor interferometric observations allow us to determine the first trigonometric parallax to RW Tri. This determination puts the distance of RW Tri at 341 parsec, making it one of the most distant objects with a direct parallax measurement.
Astrometry with Hubble Space Telescope Fine Guidance Sensor 3: The Parallax of the Cataclysmic Variable RW Triangulum
We consider the geometric amoebot model, where a set of $n$ amoebots is placed on the triangular grid. An amoebot is able to send information to its neighbors and to move via expansions and contractions. Since amoebots and information can only travel node by node, most problems have a natural lower bound of $\Omega(D)$, where $D$ denotes the diameter of the structure. Inspired by the nervous and muscular systems, Feldmann et al. have proposed the reconfigurable circuit extension and the joint movement extension of the amoebot model with the goal of breaking this lower bound. In the joint movement extension, the way amoebots move is altered: amoebots become able to push and pull other amoebots. Feldmann et al. demonstrated the power of joint movements by transforming a line of amoebots into a rhombus within $O(\log n)$ rounds. However, they left the details of the extension open. The goal of this paper is therefore to formalize and extend the joint movement extension. To provide a proof of concept for the extension, we consider two fundamental problems of modular robot systems: shape formation and locomotion. We approach these problems by defining meta-modules of rhombical and hexagonal shape, respectively. The meta-modules are capable of movement primitives such as sliding, rotating, and tunneling. This allows us to simulate shape formation algorithms of various modular robot systems. Finally, we construct three amoebot structures capable of locomotion by rolling, crawling, and walking, respectively.
Shape Formation and Locomotion with Joint Movements in the Amoebot Model
In this work we provide algorithmic solutions to five fundamental problems concerning the verification, synthesis and correction of concurrent systems that can be modeled by bounded p/t-nets. We express concurrency via partial orders and assume that behavioral specifications are given via monadic second order logic. A c-partial-order is a partial order whose Hasse diagram can be covered by c paths. For a finite set T of transitions, we let P(c,T,\phi) denote the set of all T-labelled c-partial-orders satisfying \phi. If N=(P,T) is a p/t-net we let P(N,c) denote the set of all c-partially-ordered runs of N. A (b, r)-bounded p/t-net is a b-bounded p/t-net in which each place appears repeated at most r times. We solve the following problems: 1. Verification: given an MSO formula \phi and a bounded p/t-net N determine whether P(N,c)\subseteq P(c,T,\phi), whether P(c,T,\phi)\subseteq P(N,c), or whether P(N,c)\cap P(c,T,\phi)=\emptyset. 2. Synthesis from MSO Specifications: given an MSO formula \phi, synthesize a semantically minimal (b,r)-bounded p/t-net N satisfying P(c,T,\phi)\subseteq P(N, c). 3. Semantically Safest Subsystem: given an MSO formula \phi defining a set of safe partial orders, and a b-bounded p/t-net N, possibly containing unsafe behaviors, synthesize the safest (b,r)-bounded p/t-net N' whose behavior lies in between P(N,c)\cap P(c,T,\phi) and P(N,c). 4. Behavioral Repair: given two MSO formulas \phi and \psi, and a b-bounded p/t-net N, synthesize a semantically minimal (b,r)-bounded p/t net N' whose behavior lies in between P(N,c) \cap P(c,T,\phi) and P(c,T,\psi). 5. Synthesis from Contracts: given an MSO formula \phi^yes specifying a set of good behaviors and an MSO formula \phi^no specifying a set of bad behaviors, synthesize a semantically minimal (b,r)-bounded p/t-net N such that P(c,T,\phi^yes) \subseteq P(N,c) but P(c,T,\phi^no ) \cap P(N,c)=\emptyset.
Automated Verification, Synthesis and Correction of Concurrent Systems via MSO Logic
We study Private Information Retrieval with Side Information (PIR-SI) in the single-server multi-message setting. In this setting, a user wants to download $D$ messages from a database of $K\geq D$ messages, stored on a single server, without revealing any information about the identities of the demanded messages to the server. The goal of the user is to achieve information-theoretic privacy by leveraging the side information about the database. The side information consists of a random subset of $M$ messages in the database which could have been obtained in advance from other users or from previous interactions with the server. The identities of the messages forming the side information are initially unknown to the server. Our goal is to characterize the capacity of this setting, i.e., the maximum achievable download rate. In our previous work, we have established the PIR-SI capacity for the special case in which the user wants a single message, i.e., $D=1$ and showed that the capacity can be achieved through the Partition and Code (PC) scheme. In this paper, we focus on the case when the user wants multiple messages, i.e., $D>1$. Our first result is that if the user wants more messages than what they have as side information, i.e., $D>M$, then the capacity is $\frac{D}{K-M}$, and it can be achieved using a scheme based on the Generalized Reed-Solomon (GRS) codes. In this case, the user must learn all the messages in the database in order to obtain the desired messages. Our second result shows that this may not be necessary when $D\leq M$, and the capacity in this case can be higher. We present a lower bound on the capacity based on an achievability scheme which we call Generalized Partition and Code (GPC).
On the Capacity of Single-Server Multi-Message Private Information Retrieval with Side Information
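As a worked reading of the $D>M$ case described above (with the assumption, consistent with the abstract, that the GRS-based scheme returns message-sized coded answers), take
\[
K=8,\qquad M=3,\qquad D=4>M \;\Longrightarrow\; \text{capacity}=\frac{D}{K-M}=\frac{4}{5}.
\]
The user then downloads $K-M=5$ coded answers; since $M=3$ messages are already known as side information, only $K-M=5$ messages remain unknown, and the MDS property of the GRS code makes the corresponding $5\times 5$ linear system invertible. The user therefore decodes the whole database, and in particular the $D=4$ demanded messages, which matches the statement that all messages must be learned in this regime.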
With their increasing size, large language models (LLMs) are becoming increasingly good at language understanding tasks. But even with high performance on specific downstream tasks, LLMs fail at simple linguistic tests for negation or quantifier understanding. Previous work on quantifier understanding in LLMs shows inverse scaling in understanding few-type quantifiers. In this paper, we question the claims of previous work and show that they are a result of inappropriate testing methodology. We also present alternate methods to measure quantifier comprehension in LLMs and show that LLMs are able to better understand the difference between the meaning of few-type and most-type quantifiers as their size increases, although they are not particularly good at it. We also observe inverse scaling for most-type quantifier understanding, which is contrary to human psycholinguistic experiments and previous work: the model's understanding of most-type quantifiers gets worse as the model size increases. We perform this evaluation on models ranging from 125M to 175B parameters, which suggests that LLMs do not do as well as expected with quantifiers. We also discuss possible reasons for this and the relevance of quantifier understanding in evaluating language understanding in LLMs.
Probing Quantifier Comprehension in Large Language Models: Another Example of Inverse Scaling
We present ALMA band 7 (345 GHz) continuum and $^{12}$CO(J = 3-2) observations of the circumstellar disk surrounding HD141569. At an age of about 5 Myr, the disk has a complex morphology that may be best interpreted as a nascent debris system with gas. Our $870\rm~\mu m$ ALMA continuum observations resolve a dust disk out to approximately $ 56 ~\rm au$ from the star (assuming a distance of 116 pc) with $0."38$ resolution and $0.07 ~ \rm mJy~beam^{-1}$ sensitivity. We measure a continuum flux density for this inner material of $3.8 \pm 0.4 ~ \rm mJy$ (including calibration uncertainties). The $^{12}$CO(3-2) gas is resolved kinematically and spatially from about 30 to 210 au. The integrated $^{12}$CO(3-2) line flux density is $15.7 \pm 1.6~\rm Jy~km~s^{-1}$. We estimate the mass of the millimeter debris and $^{12}$CO(3-2) gas to be $\gtrsim0.04~\rm M_{\oplus}$ and $\sim2\times 10^{-3}~\rm M_{\oplus}$, respectively. If the millimeter grains are part of a collisional cascade, then we infer that the inner disk ($<50$ au) has $\sim 160~\rm M_{\oplus}$ contained within objects less than 50 km in radius, depending on the planetesimal size distribution and density assumptions. MCMC modeling of the system reveals a disk morphology with an inclination of $53.4^{\circ}$ centered around a $\rm M=2.39~ M_{\odot}$ host star ($\rm Msin(i)=1.92~ M_{\odot}$). We discuss whether the gas in HD141569's disk may be second generation. If it is, the system can be used to study the clearing stages of planet formation.
ALMA Observations of HD141569's Circumstellar Disk
Research in adversarial learning follows a cat and mouse game between attackers and defenders where attacks are proposed, they are mitigated by new defenses, and subsequently new attacks are proposed that break earlier defenses, and so on. However, it has remained unclear as to whether there are conditions under which no better attacks or defenses can be proposed. In this paper, we propose a game-theoretic framework for studying attacks and defenses which exist in equilibrium. Under a locally linear decision boundary model for the underlying binary classifier, we prove that the Fast Gradient Method attack and the Randomized Smoothing defense form a Nash Equilibrium. We then show how this equilibrium defense can be approximated given finitely many samples from a data-generating distribution, and derive a generalization bound for the performance of our approximation.
A Game Theoretic Analysis of Additive Adversarial Attacks and Defenses
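For concreteness, here is a small numpy sketch of the two strategies named in the abstract, an L2 Fast Gradient Method perturbation and a randomized-smoothing (noise-and-vote) prediction, for an explicitly linear binary classifier, the simplest instance of the locally linear boundary model. The weights, perturbation radius and noise scale are illustrative assumptions.

```python
# FGM attack and randomized-smoothing prediction for f(x) = sign(w.x + b).
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([1.0, -2.0]), 0.5          # linear decision boundary

def fgm_attack(x, y, eps=0.8):
    # L2 FGM: move distance eps against the margin y*(w.x + b),
    # i.e. along -y * w / ||w||, the steepest descent direction of the margin.
    return x - eps * y * w / np.linalg.norm(w)

def smoothed_predict(x, sigma=0.5, n=2000):
    # Randomized smoothing: majority vote of the base classifier
    # under isotropic Gaussian noise of scale sigma.
    noise = sigma * rng.standard_normal((n, x.size))
    votes = np.sign((x + noise) @ w + b)
    return 1.0 if votes.mean() >= 0 else -1.0

x, y = np.array([2.0, 0.0]), 1.0           # clean point with true label +1
x_adv = fgm_attack(x, y)

print("clean margin      :", y * (w @ x + b))
print("adversarial margin:", y * (w @ x_adv + b))
print("smoothed label    :", smoothed_predict(x_adv))
```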
This paper introduces a string-based extension of the Borsuk-Ulam Theorem (denoted by strBUT). A string is a region with zero width and either bounded or unbounded length on the surface of an $n$-sphere or a region of a normed linear space. In this work, an $n$-sphere surface is covered by a collection of strings. For a strongly proximal continuous function on an $n$-sphere into $n$-dimensional Euclidean space, there exists a pair of antipodal $n$-sphere strings with matching descriptions that map into Euclidean space $\mathbb{R}^n$. Each region $M$ of a string-covered $n$-sphere is a worldsheet. For a strongly proximal continuous mapping from a worldsheet-covered $n$-sphere to $\mathbb{R}^n$, strongly near antipodal worldsheets map into the same region in $\mathbb{R}^n$. This leads to a wired friend theorem in descriptive string theory. An application of strBUT is given in terms of the evaluation of Electroencephalography (EEG) patterns.
Region-Based Borsuk-Ulam Theorem and Wired Friend Theorem
We present a detailed spectroscopic study of the optical counterpart of the neutron star X-ray transient Aquila X-1 during its 2011, 2013 and 2016 outbursts. We use 65 intermediate resolution GTC-10.4m spectra with the aim of detecting irradiation-induced Bowen blend emission from the donor star. While Gaussian fitting does not yield conclusive results, our full phase coverage allows us to exploit Doppler mapping techniques to independently constrain the donor star radial velocity. By using the component N III 4640.64/4641.84 A we measure Kem = 102 +-6 km/s. This highly significant detection ( >13 sigma) is fully compatible with the true companion star radial velocity obtained from near-infrared spectroscopy during quiescence. Combining these two velocities we determine, for the first time, the accretion disc opening angle and its associated error from direct spectroscopic measurements and detailed modelling, obtaining alpha = 15.5 +2.5 -5 deg. This value is consistent with theoretical work if significant X-ray irradiation is taken into account and is important in the light of recent observations of GX339-4, where discrepant results were obtained between the donor's intrinsic radial velocity and the Bowen-inferred value. We also discuss the limitations of the Bowen technique when complete phase coverage is not available.
Bowen emission from Aquila X-1: evidence for multiple components and constraint on the accretion disc vertical structure
The use of unitary invariant subspaces of a Hilbert space $\mathcal{H}$ is nowadays a recognized fact in the treatment of sampling problems. Indeed, shift-invariant subspaces of $L^2(\mathbb{R})$ and periodic extensions of finite signals are remarkable examples where this occurs. As a consequence, an abstract unitary sampling theory becomes a useful tool to handle these problems. In this paper we derive a sampling theory for tensor products of unitary invariant subspaces. This allows us to merge the cases of finitely and infinitely generated unitary invariant subspaces formerly studied in the mathematical literature, and it also allows us to treat the several-variables case. As the involved samples are identified as frame coefficients in suitable tensor product spaces, the relevant mathematical technique is that of frame theory, involving both the finite and infinite dimensional cases.
Modeling sampling in tensor products of unitary invariant subspaces
The effect of a cosmological constant on the precession of the line of apsides is O(\Lambda c^2 r^3/GM) which is 3(H_\circ P)^2/8\pi^2 \approx 10^{-23} for a vacuum-dominated Universe with Hubble constant H_\circ = 65 km/sec/Mpc and for the orbital period P = 88 days of Mercury. This is unmeasurably small, so planetary perturbations cannot be used to limit the cosmological constant, contrary to the suggestion by Cardona & Tejeiro (1998).
Interplanetary Measures Can Not Bound the Cosmological Constant
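The quoted number is a one-line calculation; the snippet below reproduces the $\approx 10^{-23}$ estimate from $3(H_\circ P)^2/8\pi^2$ with the stated $H_\circ$ and Mercury's orbital period.

```python
# Back-of-the-envelope check of 3 (H0 P)^2 / (8 pi^2) for H0 = 65 km/s/Mpc, P = 88 days.
import math

H0 = 65e3 / 3.086e22      # Hubble constant in s^-1  (1 Mpc = 3.086e22 m)
P = 88 * 86400.0          # Mercury's orbital period in seconds

print(3 * (H0 * P)**2 / (8 * math.pi**2))   # ~1e-23, as stated in the abstract
```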
Optically pumped atomic magnetometers (OPMs) offer highly sensitive magnetic measurements using compact hardware, offering new possibilities for practical precision sensors. Double-resonance OPM operation is well suited to unshielded magnetometry, due to high sensor dynamic range. However, sensor response is highly anisotropic with variation in the orientation of the magnetic field. We present data quantifying these effects and discuss implications for the design of practical sensors.
Optically Pumped Magnetometry in Arbitrarily Oriented Magnetic Fields
We present the Theorem Prover Museum, an initiative to conserve -- and make publicly available -- the sources and source-related artefacts of automated reasoning systems. Theorem provers have been at the forefront of Artificial Intelligence, stretching the limits of computation and incubating many innovations we take for granted today. Without the systems themselves as preserved cultural artefacts, future historians will have difficulties studying the history of science and engineering in our discipline.
The Theorem Prover Museum -- Conserving the System Heritage of Automated Reasoning
We study the spin-1/2 Heisenberg XXX antiferromagnet. The spectrum of the Hamiltonian was found by Hans Bethe in 1931. We study the probability of formation of a ferromagnetic string in the antiferromagnetic ground state, which we call the emptiness formation probability P(n). This is the most fundamental correlation function. We prove that for short strings it can be expressed in terms of the Riemann zeta function with odd arguments, the logarithm ln 2 and rational coefficients. This adds yet another link between statistical mechanics and number theory. We have obtained an analytical formula for P(5) for the first time. We have also calculated P(n) numerically by the Density Matrix Renormalization Group. The results agree quite well with the analytical ones. Furthermore we study the asymptotic behavior of P(n) at finite temperature by Quantum Monte-Carlo simulation. It also agrees with our previous analytical results.
Quantum Correlations and Number Theory
Multivariate time series classification (MTSC) is an important data mining task, which can be effectively solved by popular deep learning technology. Unfortunately, the existing deep learning-based methods neglect the hidden dependencies in different dimensions and also rarely consider the unique dynamic features of time series, which lack sufficient feature extraction capability to obtain satisfactory classification accuracy. To address this problem, we propose a novel temporal dynamic graph neural network (TodyNet) that can extract hidden spatio-temporal dependencies without undefined graph structure. It enables information flow among isolated but implicit interdependent variables and captures the associations between different time slots by dynamic graph mechanism, which further improves the classification performance of the model. Meanwhile, the hierarchical representations of graphs cannot be learned due to the limitation of GNNs. Thus, we also design a temporal graph pooling layer to obtain a global graph-level representation for graph learning with learnable temporal parameters. The dynamic graph, graph information propagation, and temporal convolution are jointly learned in an end-to-end framework. The experiments on 26 UEA benchmark datasets illustrate that the proposed TodyNet outperforms existing deep learning-based methods in the MTSC tasks.
TodyNet: Temporal Dynamic Graph Neural Network for Multivariate Time Series Classification
Magnetic fields in extragalactic space between galaxy clusters may induce conversions between photons and axion-like particles (ALPs), thereby shielding the photons from absorption on the extragalactic background light. For TeV gamma rays, the oscillation length ($l_{\rm osc}$) of the photon-ALP system becomes inevitably of the same order as the coherence length of the magnetic field ($l$) and the length over which the field changes significantly (transition length $l_{\rm t}$) due to refraction on background photons. We derive exact statistical evolution equations for the mean and variance of the photon and ALP transfer functions in the non-adiabatic regime ($l_{\rm osc} \sim l \gg l_{\rm t}$). We also make analytical predictions for the transfer functions in the quasi-adiabatic regime ($l_{\rm osc} \ll l, l_{\rm t}$). Our results are important in light of the upcoming Cherenkov Telescope Array (CTA), and may also be applied to models with non-zero ALP masses.
Extragalactic photon-ALP conversion at CTA energies
Fine-tuning (via methods such as instruction-tuning or reinforcement learning from human feedback) is a crucial step in training language models to robustly carry out tasks of interest. However, we lack a systematic understanding of the effects of fine-tuning, particularly on tasks outside the narrow fine-tuning distribution. In a simplified scenario, we demonstrate that improving performance on tasks within the fine-tuning data distribution comes at the expense of suppressing model capabilities on other tasks. This degradation is especially pronounced for tasks "closest" to the fine-tuning distribution. We hypothesize that language models implicitly infer the task that the prompt corresponds to, and that the fine-tuning process predominantly skews this task inference towards tasks in the fine-tuning distribution. To test this hypothesis, we propose Conjugate Prompting to see if we can recover pretrained capabilities. Conjugate prompting artificially makes the task look farther from the fine-tuning distribution while requiring the same capability. We find that conjugate prompting systematically recovers some of the pretraining capabilities in our synthetic setup. We then apply conjugate prompting to real-world LLMs using the observation that fine-tuning distributions are typically heavily skewed towards English. We find that simply translating the prompts to different languages can cause the fine-tuned models to respond like their pretrained counterparts instead. This allows us to recover the in-context learning abilities lost via instruction tuning, and more concerningly, to recover harmful content generation suppressed by safety fine-tuning in chatbots like ChatGPT.
Understanding Catastrophic Forgetting in Language Models via Implicit Inference
The statistical behavior of a classical $\phi^{4}$ Hamiltonian lattice is investigated from microscopic dynamics. The largest Lyapunov exponent and entropies are used to characterize the chaotic and equipartition behaviors of the system. It is found, for the first time, that for any large but finite system size there exist two critical couplings for the transition to equipartition, and the scaling behavior of these lower and upper critical couplings with system size is obtained numerically.
Transitions to equilibrium state in classical \phi ^{4} lattice
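A sketch of the kind of measurement the abstract relies on: a Benettin-style estimate of the largest Lyapunov exponent from two nearby trajectories with periodic renormalisation. The particular $\phi^4$-chain Hamiltonian, coupling and initial conditions used below are assumptions for illustration, since the abstract does not fix a convention.

```python
# Benettin two-trajectory estimate of the largest Lyapunov exponent of a phi^4 chain.
import numpy as np

rng = np.random.default_rng(2)
N, beta, dt = 32, 1.0, 0.01             # chain length, quartic coupling, time step (assumed)

def force(x):
    lap = np.roll(x, 1) - 2 * x + np.roll(x, -1)    # periodic nearest-neighbour coupling
    return lap - x - beta * x**3                     # assumed on-site potential x^2/2 + beta x^4/4

def verlet(x, p, steps):                # velocity Verlet integrator
    for _ in range(steps):
        p = p + 0.5 * dt * force(x)
        x = x + dt * p
        p = p + 0.5 * dt * force(x)
    return x, p

x = 0.3 * rng.standard_normal(N)
p = 0.3 * rng.standard_normal(N)

d0 = 1e-8                               # initial separation of the companion trajectory
dx = rng.standard_normal(N)
x2, p2 = x + d0 * dx / np.linalg.norm(dx), p.copy()

lam_sum, renorms, tau = 0.0, 400, 50    # renormalise every tau steps
for _ in range(renorms):
    x, p = verlet(x, p, tau)
    x2, p2 = verlet(x2, p2, tau)
    d = np.sqrt(np.sum((x2 - x)**2) + np.sum((p2 - p)**2))
    lam_sum += np.log(d / d0)
    x2 = x + (x2 - x) * (d0 / d)        # pull the companion back to distance d0
    p2 = p + (p2 - p) * (d0 / d)

print("largest Lyapunov exponent estimate:", lam_sum / (renorms * tau * dt))
```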
In the quest for the formation and evolution of galaxy clusters, Rakos and co-workers introduced a spectrophotometric method using modified Str\"omgren photometry. Given the considerable debate about the method's capabilities, we re-introduce the system after thoroughly testing the repeatability of colors and the reproducibility of the ages and metallicities for six common galaxies in the three A779 data sets. A fair agreement has been found between the modified Str\"omgren and Str\"omgren filter systems in producing similar colors (with a precision of 0.09 mag in (uz-vz), 0.02 mag in (bz-yz), and 0.03 mag in (vz-yz)), ages and metallicities (with uncertainties of 0.36 Gyr and 0.04 dex from the PCA and 0.44 Gyr and 0.2 dex using the GALEV models). We infer that the technique is able to mitigate the age-metallicity degeneracy by separating age effects from metallicity effects, but is still unable to break it completely. We further extend this paper to re-study the evolution of galaxies in the low-mass, dynamically poor A779 cluster by correlating the luminosity (mass), density, and radial distance with the estimated age, metallicity, and star formation history. Our results distinctly show the bimodality of a young, low-mass, metal-poor population with a mean age of 6.7 Gyr (\pm 0.5 Gyr) and old, high-mass, metal-rich galaxies with a mean age of 9 Gyr (\pm 0.5 Gyr). The method also captures the color evolution of the blue cluster galaxies to red, and the downsizing phenomenon. Our analysis shows that modified Str\"omgren photometry is very well suited for studying low- and intermediate-z clusters, as it is capable of observing deeper with better spatial resolution at spectroscopic redshift limits, and the narrowband filters estimate the age and metallicity with smaller uncertainties than other methods that study stellar population scenarios.
Ages and Metallicities of Cluster Galaxies in A779 using Modified Str\"omgren Photometry
Fundamental understanding of ionic transport at the nanoscale is essential for developing biosensors based on nanopore technology and new generation high-performance nanofiltration membranes for separation and purification applications. We study here ionic transport through single putatively neutral hydrophobic nanopores with high aspect ratio (of length L=6 \mu m with diameters ranging from 1 to 10 nm) and with a well controlled cylindrical geometry. We develop a detailed hybrid mesoscopic theoretical approach for the electrolyte conductivity inside nanopores, which considers explicitly ion advection by electro-osmotic flow and possible flow slip at the pore surface. By fitting the experimental conductance data we show that for nanopore diameters greater than 4 nm a constant weak surface charge density of about 10$^{-2}$ C m$^{-2}$ needs to be incorporated in the model to account for conductance plateaus of a few pico-Siemens at low salt concentrations. For tighter nanopores, our analysis leads to a higher surface charge density, which can be attributed to a modification of ion solvation structure close to the pore surface, as observed in the molecular dynamics simulations we performed.
Ionic transport through sub-10 nm diameter hydrophobic high-aspect ratio nanopores: experiment, theory and simulation
We investigate the dynamics of a single semiflexible filament under the action of a compressive force, using numerical simulations and scaling arguments. The force is applied along the end-to-end vector at one extremity of the filament, while the other end is held fixed. We find that, unlike an elastic rod, the filament folds asymmetrically, with a folding length that depends only on the bending stiffness and the applied force. It is shown that this behavior can be attributed to the exponentially decaying tension profile in the filament. While the folding time depends on the initial configuration, at late times the distance moved by the terminal point of the filament and the length of the fold show a power-law dependence on time with an exponent of 1/2.
Dynamics of folding in Semiflexible filaments
This paper describes the software implementation of a genetic algorithm for identifying and selecting the most relevant results obtained during sequentially executed subject search operations. The simulated evolutionary process generates a sustainable and effective population of search queries, forms a search pattern of documents (semantic core), creates relevant sets of required documents, and allows automatic classification of search results. The paper discusses the features of subject search, justifies the use of a genetic algorithm, describes the arguments of the fitness function, and outlines the basic steps and parameters of the algorithm.
Genetic algorithm implementation for effective document subject search
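A generic sketch of the GA loop described above, in which a chromosome is a bit mask over vocabulary terms and fitness scores how well the retrieved set matches documents already judged relevant. The toy corpus, the F1-based fitness and the GA parameters are assumptions for illustration, not the paper's actual fitness arguments.

```python
# Toy genetic algorithm evolving search queries (bit masks over a vocabulary).
import random

random.seed(0)
VOCAB = ["grid", "energy", "solar", "battery", "market", "policy", "storage", "inverter"]
DOCS = [set(random.sample(VOCAB, k=random.randint(2, 5))) for _ in range(60)]   # toy corpus
RELEVANT = {i for i, d in enumerate(DOCS) if {"solar", "storage"} & d}          # judged relevant

def retrieve(mask):                      # OR-query: a doc matches if it shares any selected term
    terms = {t for t, bit in zip(VOCAB, mask) if bit}
    return {i for i, d in enumerate(DOCS) if terms & d}

def fitness(mask):                       # F1 between retrieved set and relevance judgements
    got = retrieve(mask)
    if not got:
        return 0.0
    tp = len(got & RELEVANT)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(got), tp / max(1, len(RELEVANT))
    return 2 * prec * rec / (prec + rec)

def tournament(pop):                     # pick the fittest of three random candidates
    return max(random.sample(pop, 3), key=fitness)

pop = [[random.randint(0, 1) for _ in VOCAB] for _ in range(30)]
for gen in range(40):
    new = []
    for _ in range(len(pop)):
        a, b = tournament(pop), tournament(pop)
        cut = random.randrange(1, len(VOCAB))            # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                        # bit-flip mutation
            i = random.randrange(len(VOCAB)); child[i] ^= 1
        new.append(child)
    pop = new

best = max(pop, key=fitness)
print("best query terms:", [t for t, bit in zip(VOCAB, best) if bit],
      "F1 =", round(fitness(best), 3))
```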
To study the electrochemical reaction on surfaces, phase interfaces, and crack surfaces in lithium-ion battery electrode particles, a phase-field model is developed that describes fracture at large strains and an anisotropic Cahn-Hilliard-Reaction. The concentration dependence of the elastic properties and the anisotropy of the diffusivity are also considered. The implementation in 3D is carried out with isogeometric finite element methods in order to treat the higher-order terms in a straightforward manner. The electrochemical reaction is modeled through a modified Butler-Volmer equation to account for the influence of the phase change on the reaction at exterior surfaces. The reaction on the crack surfaces is taken into account through a volume source term weighted by a term related to the fracture order parameter. Based on the model, three characteristic examples are considered to reveal the electrochemical reactions on particle surfaces, phase interfaces, and crack surfaces, as well as their influence on the particle material behavior. Results show that the ratio between the timescale of the reaction and that of diffusion can have a significant influence on the phase segregation behavior, as can the anisotropy of the diffusivity. In turn, the distribution of the lithium concentration greatly influences the reaction on the surface, especially when phase interfaces reach the exterior surfaces or crack surfaces. The reaction rate increases considerably at phase interfaces due to the large lithium concentration gradient. Moreover, simulations demonstrate that the segregation into a Li-rich and a Li-poor phase during delithiation can drive crack propagation. These results indicate that the model can capture the electrochemical reaction on freshly cracked surfaces.
Phase-field study of electrochemical reactions at exterior and interior interfaces in Li-Ion battery electrode particles
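For reference, the standard (unmodified) Butler-Volmer relation that such models build on can be written as
\[
i \;=\; i_0\!\left[\exp\!\left(\frac{\alpha_a F \eta}{R T}\right)-\exp\!\left(-\frac{\alpha_c F \eta}{R T}\right)\right],
\qquad
\eta \;=\; \phi_s-\phi_e-U_{\mathrm{eq}}(c),
\]
where $i_0$ is the (concentration-dependent) exchange current density, $\alpha_a$ and $\alpha_c$ are the anodic and cathodic transfer coefficients, $F$ the Faraday constant, $R$ the gas constant, $T$ the temperature, and $\eta$ the surface overpotential measured against an equilibrium potential $U_{\mathrm{eq}}$ that depends on the lithium concentration $c$. The modification introduced in the paper, which lets the reaction respond to the phase change at the surface, is not reproduced here.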
The primary purpose of dialogue state tracking (DST), a critical component of an end-to-end conversational system, is to build a model that responds well to real-world situations. Although we often change our minds during ordinary conversations, current benchmark datasets do not adequately reflect such occurrences and instead consist of over-simplified conversations in which no one changes their mind. The main question inspiring the present study is: "Are current benchmark datasets sufficiently diverse to handle casual conversations in which one changes their mind after a certain topic is over?" We found that the answer is "No", because DST models cannot refer to previous user preferences when template-based turnback utterances are injected into the dataset. Even in the simplest mind-changing (turnback) scenario, the performance of DST models degraded significantly. However, we found that this performance degradation can be recovered when the turnback scenarios are explicitly designed into the training set, implying that the problem lies not with the DST models but with the construction of the benchmark dataset.
Oh My Mistake!: Toward Realistic Dialogue State Tracking including Turnback Utterances
A periodic plasmonic meta-material was studied using the finite-difference time-domain (FDTD) method to investigate the influence of neighboring particles on the near-unity optical absorptivity. The meta-material was constructed as a silver nanoparticle (20-90nm) situated above an alumina (Al$_2$O$_3$) dielectric environment. A full parametric sweep of the particle width and the dielectric thickness was conducted. Computational results identified several resonances between the metal-dielectric and metal-air interfaces that have the potential to broaden the response through stacked geometries. A significant coupling between the metal-dielectric resonance and a cavity resonance between particles was captured as a function of the dielectric thickness. This coupled resonance was not evident below dielectric thicknesses of 40nm and above cavity widths of 20nm. Additionally, a noticeable propagating surface plasmon polariton resonance was predicted when the particle width was half the unit cell length.
Neighboring Interactions in a Periodic Plasmonic Material for Solar-Thermal Energy Conversion
We introduce and study the entanglement breaking rank of an entanglement breaking channel. We show that the entanglement breaking rank of the channel $\mathfrak Z: M_d \to M_d$ defined by \begin{align*} \mathfrak Z(X) = \frac{1}{d+1}(X+\text{Tr}(X)\mathbb I_d) \end{align*} is $d^2$ if and only if there exists a symmetric informationally-complete POVM in dimension $d$.
Entanglement Breaking Rank and the existence of SIC POVMs
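The channel $\mathfrak Z$ in the abstract is simple enough to check numerically. The snippet below builds its Choi matrix and verifies trace preservation, complete positivity, and the PPT condition (a necessary condition for being entanglement breaking); the dimension $d=4$ is an arbitrary illustrative choice.

```python
# Numerical sanity checks for Z(X) = (X + Tr(X) I) / (d + 1).
import numpy as np

d = 4
I = np.eye(d)

def Z(X):
    return (X + np.trace(X) * I) / (d + 1)

# Choi matrix  C = sum_{ij} Z(|i><j|) (x) |i><j|
C = np.zeros((d * d, d * d), dtype=complex)
for i in range(d):
    for j in range(d):
        Eij = np.zeros((d, d)); Eij[i, j] = 1.0
        C += np.kron(Z(Eij), Eij)

# trace preservation: tracing out the output factor of C gives the identity
pt = np.trace(C.reshape(d, d, d, d), axis1=0, axis2=2)
print("trace preserving   :", np.allclose(pt, I))

# complete positivity: the Choi matrix is positive semidefinite
print("completely positive:", np.linalg.eigvalsh(C).min() > -1e-12)

# partial transpose on the second factor stays positive (PPT)
Cpt = C.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)
print("PPT (necessary for entanglement breaking):", np.linalg.eigvalsh(Cpt).min() > -1e-12)
```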
Using the embedded gradient vector field method (see P. Birtea, D. Comanescu, Hessian operators on constraint manifolds, J. Nonlinear Science 25, 2015), we present a general formula for the Laplace-Beltrami operator defined on a constraint manifold, written in the ambient coordinates. Regarding the orthogonal group as a constraint submanifold of the Euclidean space of $n\times n$ matrices, we give an explicit formula for the Laplace-Beltrami operator on the orthogonal group using the ambient Euclidean coordinates. We apply this new formula for some relevant functions.
Laplace-Beltrami operator on the orthogonal group in Cartesian coordinates
We investigate molecular evolution from a molecular cloud core to a first hydrostatic core in three spatial dimensions. We perform a radiation hydrodynamic simulation in order to trace fluid parcels, in which molecular evolution is investigated, using a gas-phase and grain-surface chemical reaction network. We derive spatial distributions of molecular abundances and column densities in the core harboring the first core. We find that the total of gas and ice abundances of many species in a cold era (10 K) remain unaltered until the temperature reaches ~500 K. The gas abundances in the warm envelope and the outer layer of the first core (T < 500 K) are mainly determined via the sublimation of ice-mantle species. Above 500 K, the abundant molecules, such as H2CO, start to be destroyed, and simple molecules, such as CO, H2O and N2 are reformed. On the other hand, some molecules are effectively formed at high temperature; carbon-chains, such as C2H2 and cyanopolyynes, are formed at the temperature of >700 K. We also find that large organic molecules, such as CH3OH and HCOOCH3, are associated with the first core (r < 10 AU). Although the abundances of these molecules in the first core stage are comparable or less than in the protostellar stage (hot corino), reflecting the lower luminosity of the central object, their column densities in our model are comparable to the observed values toward the prototypical hot corino, IRAS 16293-2422. We propose that these large organic molecules can be good tracers of the first cores.
Chemistry in the First Hydrostatic Core Stage By Adopting Three-Dimensional Radiation Hydrodynamic Simulations
XY pyrochlore antiferromagnets are well-known to exhibit order-by-disorder through both quantum and thermal selection. In this paper we consider the effect of substituting non-magnetic ions onto the magnetic sites in a pyrochlore XY model with generally anisotropic exchange tuned by a single parameter $J^{\pm\pm}/J^\pm$. The physics is controlled by two points in this space of parameters $J^{\pm\pm}/J^\pm=\pm 2$ at which there are line modes in the ground state and hence an $O(L^2)$ ground state degeneracy intermediate between that of a conventional magnet and a Coulomb phase. At each of these points, single vacancies seed pairs of line defects. Two line defects carrying incompatible spin configurations from different vacancies can cross leading to an effective one-dimensional description of the resulting spin texture. In the thermodynamic limit at finite density, we find that dilution selects a state "opposite" to the state selected by thermal and quantum disorder which is understood from the single vacancy limit. The latter finding hints at the possibility that Er$_{2-x}$Y$_x$Ti$_2$O$_7$ for small $x$ exhibits a second phase transition within the thermally selected $\psi_2$ state into a $\psi_3$ state selected by the quenched disorder.
Order Induced by Dilution in Pyrochlore XY Antiferromagnets
Unstoppable feedback loops and tipping points in socio-ecological systems are the main threats to sustainability. These behaviors have been extensively studied, notably to predict, and arguably to divert, dead-end trajectories. Behind the apparent complexity of such interaction networks, systems analysts have identified a small group of patterns repeated across all systems, called archetypes. For instance, the archetype of escalation is made of two positive feedback loops fueling one another and is prevalent when competition arises, as in an arms race. Interestingly, none of the known archetypes provides sustainability: they all trigger endless amplification. In parallel, in systems biology, there have been considerable advances on incoherent loops in molecular networks over the past 20 years. Such patterns in biological networks produce stability and a form of intrinsic autonomy for all functions, from the circadian rhythm to immunity. Incoherence is the fuel of homeostasis in living systems. Here, I bridge both conclusions and propose that incoherence should be considered as a new operational archetype buffering socio-ecological fluctuations. This proposition is supported by the well-known trade-off between robustness and efficiency: adaptability requires some degree of incoherence. This applies to both technical and social systems: incoherent strategies recognize and fuel the diversity of solutions; they are the essential, yet often ignored, components of cooperation. Building on these theoretical considerations and real-life examples, incoherence might offer a counterintuitive but transformative way out of the great acceleration and, possibly, an actionable lever for decision makers.
Is incoherence required for sustainability?
A new generalisation of Goldbach's conjecture (GGC), introduced by the first author and also generalising Lemoine's conjecture, is tested. It states that for every pair of positive integers $m_1, m_2$, every sufficiently large integer $n$ satisfying certain simple criteria can be expressed as $n=m_1p+m_2q$ for some primes $p$ and $q$. GGC is checked up to $10^{12}d$ for all (up to $10^{13}d$ for some) pairs of coefficients $m_1, m_2$, where $d=\gcd{(m_1, m_2)}$ and $m_1/d, m_2/d \leq 40$. The largest counterexamples found that cannot be obtained in this form are presented. Their relatively small sizes support the plausibility of GGC. Lemoine's conjecture is verified up to a new record of $10^{13}$. Four naturally arising verifying algorithms are described and their running times compared for every pair of relatively prime $m_1\leq m_2\leq 40$. These seek to find either the $p$- or the $q$-minimal $(m_1, m_2)$-partitions of all numbers tested, by either descending or ascending search for the prime to be maximised or minimised, respectively, in the partitions. For all $m_1, m_2$, descending searches were faster than ascending ones. A heuristic explanation is provided. The relative speed of ascending [descending] searches for the $p$- and for the $q$-minimal partitions, respectively, varied with $m_1, m_2$. Using the average of $p^*_{m_1, m_2}(n)$, the minimal $p$ in all $(m_1, m_2)$-partitions of $n$, up to a sufficiently large threshold, two functions of $m_1, m_2$ are introduced, which may help predict these rankings. Our predictions correspond well with the actual rankings. They could be further improved by developing approximations to $p^*_{m_1, m_2}(n)$. Numerical data are presented, including average and maximum values of $p^*_{m_1, m_2}(n)$ up to $10^9$. An extension of GGC is proposed, generalising the Twin prime conjecture and the assertion that there are infinitely many Sophie Germain primes.
Empirical verification of a new generalisation of Goldbach's conjecture up to $10^{12}$ (or $10^{13}$) for all coefficients $\leq 40$
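A minimal sketch of one of the search strategies discussed above: finding the $q$-minimal $(m_1, m_2)$-partition of $n$ by ascending over primes $q$ and testing whether $p = (n - m_2 q)/m_1$ is prime. Trial division keeps the sketch self-contained; the large-scale verification in the paper obviously uses far more optimised code, and the example inputs below are illustrative.

```python
# Ascending search for the q-minimal (m1, m2)-partition n = m1*p + m2*q.
def is_prime(k):
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    f = 3
    while f * f <= k:
        if k % f == 0:
            return False
        f += 2
    return True

def q_minimal_partition(n, m1, m2):
    q = 2
    while m2 * q < n:
        if is_prime(q):
            rest = n - m2 * q
            if rest % m1 == 0 and is_prime(rest // m1):
                return rest // m1, q          # (p, q) with q minimal
        q += 1
    return None                               # no (m1, m2)-partition found

# example: Lemoine's conjecture is the case (m1, m2) = (1, 2) for odd n >= 7
for n in [7, 47, 10001]:
    print(n, "=", q_minimal_partition(n, 1, 2), "as p + 2q")
```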